I0525 11:48:15.467800 17 e2e.go:129] Starting e2e run "dbbe1535-a65c-4087-8756-964be54c461f" on Ginkgo node 1 {"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1621943293 - Will randomize all specs Will run 13 of 5771 specs May 25 11:48:15.484: INFO: >>> kubeConfig: /root/.kube/config May 25 11:48:15.488: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 25 11:48:15.518: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 11:48:15.571: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 11:48:15.571: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 25 11:48:15.571: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 25 11:48:15.585: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'create-loop-devs' (0 seconds elapsed) May 25 11:48:15.585: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 25 11:48:15.585: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds' (0 seconds elapsed) May 25 11:48:15.585: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 25 11:48:15.585: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'tune-sysctls' (0 seconds elapsed) May 25 11:48:15.585: INFO: e2e test version: v1.21.1 May 25 11:48:15.587: INFO: kube-apiserver version: v1.21.1 May 25 11:48:15.587: INFO: >>> kubeConfig: /root/.kube/config May 25 11:48:15.598: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:48:15.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W0525 11:48:15.651780 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 25 11:48:15.651: INFO: Found PodSecurityPolicies; 
testing pod creation to see if PodSecurityPolicy is enabled May 25 11:48:15.661: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:48:15.664: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:48:15.673: INFO: Waiting for terminating namespaces to be deleted... May 25 11:48:15.677: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:48:15.685: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container coredns ready: true, restart count 0 May 25 11:48:15.685: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container coredns ready: true, restart count 0 May 25 11:48:15.685: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container loopdev ready: true, restart count 0 May 25 11:48:15.685: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:48:15.685: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container kube-multus ready: true, restart count 0 May 25 11:48:15.685: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:48:15.685: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container setsysctls ready: true, restart count 0 May 25 11:48:15.685: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:48:15.685: INFO: Container speaker ready: true, restart count 0 May 25 11:48:15.685: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:48:15.694: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container loopdev ready: true, restart count 0 May 25 11:48:15.694: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:48:15.694: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container kube-multus ready: true, restart count 1 May 25 11:48:15.694: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:48:15.694: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container setsysctls ready: true, restart count 0 May 25 
11:48:15.694: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:48:15.694: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:48:15.694: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container controller ready: true, restart count 0 May 25 11:48:15.694: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container speaker ready: true, restart count 0 May 25 11:48:15.694: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container contour ready: true, restart count 0 May 25 11:48:15.694: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:48:15.694: INFO: Container contour ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b988ebd8-9764-42cf-bbb8-670eeb841a53 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 172.18.0.4 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 172.18.0.4 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-b988ebd8-9764-42cf-bbb8-670eeb841a53 off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b988ebd8-9764-42cf-bbb8-670eeb841a53 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:48:23.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-14" for this suite. 
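The three pods this spec creates all bind hostPort 54321 on the same node and are expected to coexist because they differ in hostIP or protocol. A minimal sketch of that kind of manifest, assuming the pause image used elsewhere in this run and the node label applied above (the real spec in predicates.go builds these pods programmatically, so pod name, container port and image here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-b988ebd8-9764-42cf-bbb8-670eeb841a53: "90"   # label applied to the chosen node above
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1      # pod1: 54321 on 127.0.0.1, TCP
      protocol: TCP
# pod2 is identical except hostIP: 172.18.0.4 (same port, different IP -> no conflict)
# pod3 uses hostIP: 172.18.0.4 and protocol: UDP (same port and IP, different protocol -> no conflict)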
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.199 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":1,"skipped":1009,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:404 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:48:23.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 25 11:48:23.849: INFO: Waiting up to 1m0s for all nodes to be ready May 25 11:49:23.901: INFO: Waiting for terminating namespaces to be deleted... May 25 11:49:23.904: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 11:49:23.919: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 11:49:23.919: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 25 11:49:23.934: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:49:23.934: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:49:23.934: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:23.934: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:49:23.934: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:390 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
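The ComputeCPUMemFraction lines above are simply requested/allocatable per node: cpuFraction = 100m / 88000m ≈ 0.00114 and memFraction = 104857600 / 67430219776 ≈ 0.00156. The spec then creates filler pods to balance utilization across the two nodes before measuring the scoring behaviour under test; in the "after create balanced pods" output the requested totals exceed allocatable (for example 395200m CPU against 88000m) and the fraction is reported as 1, i.e. capped at 1.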
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:404 May 25 11:49:28.035: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:49:28.035: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:49:28.035: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.035: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.036: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.036: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.036: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:49:28.036: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:49:28.036: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:49:28.040: INFO: Waiting for running... May 25 11:49:33.100: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 25 11:49:38.170: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Node: v1.21-worker, totalRequestedCPUResource: 395200, cpuAllocatableMil: 88000, cpuFraction: 1 May 25 11:49:38.170: INFO: Node: v1.21-worker, totalRequestedMemResource: 302710374400, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 25 11:49:38.170: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Pod for on the node: f2028604-b2ca-47d2-bb10-9bfafc2d915e-0, Cpu: 43900, Mem: 33622835200 May 25 11:49:38.170: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 526900, cpuAllocatableMil: 88000, cpuFraction: 1 May 25 11:49:38.170: INFO: Node: v1.21-worker2, totalRequestedMemResource: 403578880000, memAllocatableVal: 67430219776, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "v1.21-worker" STEP: Verifying if the test-pod lands on node "v1.21-worker2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:398 STEP: removing the label kubernetes.io/e2e-pts-score off the node v1.21-worker STEP: verifying 
the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:49:56.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8752" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:92.449 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:386 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:404 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":2,"skipped":1297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:49:56.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 25 11:49:56.320: INFO: Waiting up to 1m0s for all nodes to be ready May 25 11:50:56.366: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node v1.21-worker. STEP: Apply 10 fake resource to node v1.21-worker2. 
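The PodTopologySpread Scoring spec that just passed works by putting 4 replicas of a labelled ReplicaSet on v1.21-worker and then checking that a test pod carrying a soft topology spread constraint over the dedicated kubernetes.io/e2e-pts-score key lands on v1.21-worker2, the node that evens out the distribution. A sketch of such a constraint, with an illustrative selector label (the real spec builds the pod in priorities.go):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    app: pts-score-demo                          # illustrative label shared with the ReplicaSet pods
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/e2e-pts-score     # per-node key applied by the test above
    whenUnsatisfiable: ScheduleAnyway            # soft constraint: affects scoring, not filtering
    labelSelector:
      matchLabels:
        app: pts-score-demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1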
[It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:51:26.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3523" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:90.393 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":3,"skipped":1543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:327 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:51:26.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 25 11:51:26.699: INFO: Waiting up to 1m0s for all nodes to be ready May 25 11:52:26.743: INFO: Waiting for terminating namespaces to be deleted... 
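The preemption spec above fills 9 of the 10 units of a node-local fake resource with one high-priority and three low-priority pods, then submits a medium-priority pod whose hard topology spread constraint can only be satisfied by evicting lower-priority pods. A sketch of the medium pod under assumed names (the PriorityClass name, selector label, extended-resource name and quantity are hypothetical; the log does not show the actual resource patched onto the nodes):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium-priority                           # hypothetical
value: 500
---
apiVersion: v1
kind: Pod
metadata:
  name: medium
  labels:
    group: pts-preemption-demo                    # hypothetical label shared by the test pods
spec:
  priorityClassName: medium-priority
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/e2e-pts-preemption # key applied by the test above
    whenUnsatisfiable: DoNotSchedule              # hard constraint: unsatisfiable placement can trigger preemption
    labelSelector:
      matchLabels:
        group: pts-preemption-demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1
    resources:
      limits:
        example.com/fake-resource: "4"            # hypothetical extended resource standing in for the "fake resource" units
      requests:
        example.com/fake-resource: "4"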
May 25 11:52:26.746: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 11:52:26.761: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 11:52:26.761: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 25 11:52:26.775: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:52:26.775: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:52:26.775: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.775: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.776: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:52:26.776: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:327 May 25 11:52:26.790: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 
419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:52:26.790: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:52:26.790: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:52:26.790: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:52:26.790: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:52:26.801: INFO: Waiting for running... May 25 11:52:31.860: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 25 11:52:36.929: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Node: v1.21-worker, totalRequestedCPUResource: 395200, cpuAllocatableMil: 88000, cpuFraction: 1 May 25 11:52:36.929: INFO: Node: v1.21-worker, totalRequestedMemResource: 302710374400, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 25 11:52:36.929: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Pod for on the node: 28e73fc7-25e4-4a68-8e01-4e291b602172-0, Cpu: 43900, Mem: 33622835200 May 25 11:52:36.929: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 526900, cpuAllocatableMil: 88000, cpuFraction: 1 May 25 11:52:36.929: INFO: Node: v1.21-worker2, totalRequestedMemResource: 403578880000, memAllocatableVal: 67430219776, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
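Each of the taints listed next has the form key=value:PreferNoSchedule. PreferNoSchedule is the soft taint effect: it does not filter a node out, it only lowers that node's score for pods that do not tolerate the taint, which is exactly the scoring behaviour this spec measures. Applying one by hand would look like kubectl taint nodes v1.21-worker <key>=<value>:PreferNoSchedule.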
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-80b6c258-841a-4cc4-9b69=testing-taint-value-60f70de8-1f9d-4143-a18c-0a84faf23d5c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-077e66bc-1848-4caa-81cf=testing-taint-value-febfb1d1-6650-4b33-9ebe-44971f8dc41f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-256d789c-9615-48bc-a56a=testing-taint-value-6270e65b-428b-45e9-b7d1-93f2c4e07c63:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c06cdc6f-51c5-4554-9f44=testing-taint-value-65a4277e-bf2e-4417-be40-d0f321b957b6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-00a46af3-c285-4e27-bbb8=testing-taint-value-5bac858e-07ab-41f6-924e-6a168d7403d3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fabb6130-639d-45de-805b=testing-taint-value-41720f8e-189c-4f9c-8c45-43a11b496ee5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-add66d20-fba1-4882-8101=testing-taint-value-c02f775f-9b8e-405a-9cc2-bffda9343b05:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6abbea33-191b-4d86-b3b6=testing-taint-value-9b3caa5f-714f-4432-89d0-8a2af48d0c77:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d1c8fb5e-03e3-42d4-8cde=testing-taint-value-02ba982f-ddd4-4522-acfa-fd742ebcfbde:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f14f24f9-2008-4f3e-80f7=testing-taint-value-44e886ca-3258-42b0-adbc-2fdedeb40ab5:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6ce229d3-2058-41fe-8582=testing-taint-value-49c46348-e5f0-46bf-b4d2-fdcc4cd77d32:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-66ae1eb5-1b67-447b-84ef=testing-taint-value-f35e235f-5017-40bf-90d9-918e84fb45db:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7f827b38-fded-48e7-afb5=testing-taint-value-4e38a729-68ca-43c7-98fb-d9762aec88b7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ae10662c-3586-479b-8191=testing-taint-value-7da4bac5-95b4-40df-a6f8-2041de7c24d9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0609a0bf-90d7-4771-9ea5=testing-taint-value-c21a3be3-5b11-4882-bf67-f7fb328bc84a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8eea23e2-18fd-4c2d-9942=testing-taint-value-b972a76f-f9f3-4af3-9f65-ad97f2ae0b6c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2435f283-553d-4bf6-b6a0=testing-taint-value-10c9532f-de27-405f-b783-c0d3b0bb99b9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c70e5f48-f13d-42f1-97ac=testing-taint-value-1c91a2d0-3e15-4305-a8cf-23c16e8f9432:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ba2f5bcb-251c-41b7-8bfe=testing-taint-value-70c2f293-a56b-4a72-8ec5-6f2a49e25334:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-7fc65c62-99ef-4e2b-8cfe=testing-taint-value-e65ac3f7-2934-4dce-b12c-7420a8b35bff:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6ce229d3-2058-41fe-8582=testing-taint-value-49c46348-e5f0-46bf-b4d2-fdcc4cd77d32:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-66ae1eb5-1b67-447b-84ef=testing-taint-value-f35e235f-5017-40bf-90d9-918e84fb45db:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7f827b38-fded-48e7-afb5=testing-taint-value-4e38a729-68ca-43c7-98fb-d9762aec88b7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ae10662c-3586-479b-8191=testing-taint-value-7da4bac5-95b4-40df-a6f8-2041de7c24d9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0609a0bf-90d7-4771-9ea5=testing-taint-value-c21a3be3-5b11-4882-bf67-f7fb328bc84a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8eea23e2-18fd-4c2d-9942=testing-taint-value-b972a76f-f9f3-4af3-9f65-ad97f2ae0b6c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2435f283-553d-4bf6-b6a0=testing-taint-value-10c9532f-de27-405f-b783-c0d3b0bb99b9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c70e5f48-f13d-42f1-97ac=testing-taint-value-1c91a2d0-3e15-4305-a8cf-23c16e8f9432:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ba2f5bcb-251c-41b7-8bfe=testing-taint-value-70c2f293-a56b-4a72-8ec5-6f2a49e25334:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7fc65c62-99ef-4e2b-8cfe=testing-taint-value-e65ac3f7-2934-4dce-b12c-7420a8b35bff:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-80b6c258-841a-4cc4-9b69=testing-taint-value-60f70de8-1f9d-4143-a18c-0a84faf23d5c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-077e66bc-1848-4caa-81cf=testing-taint-value-febfb1d1-6650-4b33-9ebe-44971f8dc41f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-256d789c-9615-48bc-a56a=testing-taint-value-6270e65b-428b-45e9-b7d1-93f2c4e07c63:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c06cdc6f-51c5-4554-9f44=testing-taint-value-65a4277e-bf2e-4417-be40-d0f321b957b6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-00a46af3-c285-4e27-bbb8=testing-taint-value-5bac858e-07ab-41f6-924e-6a168d7403d3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fabb6130-639d-45de-805b=testing-taint-value-41720f8e-189c-4f9c-8c45-43a11b496ee5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-add66d20-fba1-4882-8101=testing-taint-value-c02f775f-9b8e-405a-9cc2-bffda9343b05:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-6abbea33-191b-4d86-b3b6=testing-taint-value-9b3caa5f-714f-4432-89d0-8a2af48d0c77:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d1c8fb5e-03e3-42d4-8cde=testing-taint-value-02ba982f-ddd4-4522-acfa-fd742ebcfbde:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f14f24f9-2008-4f3e-80f7=testing-taint-value-44e886ca-3258-42b0-adbc-2fdedeb40ab5:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:52:46.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2226" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:80.118 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:327 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":4,"skipped":1831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:52:46.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:52:46.819: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:52:46.828: INFO: Waiting for terminating namespaces to be deleted... 
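The taint-toleration spec that just finished put ten PreferNoSchedule taints on the first node, a different set of ten on every other node, and then created a pod ("with-tolerations", visible in the pod listing below) that tolerates only the first node's taints, expecting scoring to prefer that node. One taint/toleration pair from the run, sketched as a manifest fragment (the remaining nine follow the same pattern; image is illustrative):

# Taint applied to the preferred node, as logged above:
#   kubernetes.io/e2e-scheduling-priorities-80b6c258-841a-4cc4-9b69=testing-taint-value-60f70de8-1f9d-4143-a18c-0a84faf23d5c:PreferNoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: with-tolerations
spec:
  tolerations:
  - key: kubernetes.io/e2e-scheduling-priorities-80b6c258-841a-4cc4-9b69
    operator: Equal
    value: testing-taint-value-60f70de8-1f9d-4143-a18c-0a84faf23d5c
    effect: PreferNoSchedule
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1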
May 25 11:52:46.831: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:52:46.840: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container coredns ready: true, restart count 0 May 25 11:52:46.840: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container coredns ready: true, restart count 0 May 25 11:52:46.840: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container loopdev ready: true, restart count 0 May 25 11:52:46.840: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:52:46.840: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container kube-multus ready: true, restart count 0 May 25 11:52:46.840: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:52:46.840: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container setsysctls ready: true, restart count 0 May 25 11:52:46.840: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container speaker ready: true, restart count 0 May 25 11:52:46.840: INFO: with-tolerations from sched-priority-2226 started at 2021-05-25 11:52:37 +0000 UTC (1 container statuses recorded) May 25 11:52:46.840: INFO: Container with-tolerations ready: true, restart count 0 May 25 11:52:46.840: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:52:46.849: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container loopdev ready: true, restart count 0 May 25 11:52:46.849: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:52:46.849: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container kube-multus ready: true, restart count 1 May 25 11:52:46.849: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:52:46.849: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container setsysctls ready: true, restart count 0 May 25 11:52:46.849: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:52:46.849: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 
2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:52:46.849: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container controller ready: true, restart count 0 May 25 11:52:46.849: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container speaker ready: true, restart count 0 May 25 11:52:46.849: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container contour ready: true, restart count 0 May 25 11:52:46.849: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:52:46.849: INFO: Container contour ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 25 11:52:46.884: INFO: Pod coredns-558bd4d5db-hdfz5 requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod coredns-558bd4d5db-k2mkk requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod create-loop-devs-lfj6m requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod create-loop-devs-mtgxk requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod kindnet-5xbgn requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod kindnet-64qsq requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod kube-multus-ds-chmxd requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod kube-multus-ds-p7tvf requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod kube-proxy-pjm2c requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod kube-proxy-wg4wq requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod tune-sysctls-b7rgm requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod tune-sysctls-f6hsg requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod dashboard-metrics-scraper-856586f554-l66m5 requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod kubernetes-dashboard-78c79f97b4-k777m requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod controller-675995489c-x7gj2 requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod speaker-lw6f6 requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod speaker-thr6r requesting local ephemeral resource =0 on Node v1.21-worker May 25 11:52:46.884: INFO: Pod contour-74948c9879-n2262 requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod contour-74948c9879-w22pr requesting local ephemeral resource =0 on Node v1.21-worker2 May 25 11:52:46.884: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node v1.21-worker May 25 
11:52:46.884: INFO: Using pod capacity: 47063248896 May 25 11:52:46.884: INFO: Node: v1.21-worker has local ephemeral resource allocatable: 470632488960 May 25 11:52:46.884: INFO: Node: v1.21-worker2 has local ephemeral resource allocatable: 470632488960 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 25 11:52:46.966: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16824d14a6afebd4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-0 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16824d14de2679e9], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.13/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16824d14edb2b55c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16824d14eef59caa], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16824d14f710e4f7], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16824d14a72408db], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-1 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16824d14ecf92a40], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.26/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16824d14f8981c06], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16824d14f9e0e044], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16824d15062df01b], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16824d14a974f2e7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-10 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16824d153a63f59a], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.32/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16824d1547fae5aa], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16824d15490f846f], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16824d15529acf25], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16824d14a9e13a10], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-11 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16824d15174c7a2d], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.17/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16824d1527a5870b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16824d152994813f], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = 
[overcommit-11.16824d1530ca9ba0], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16824d14a9e8715f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-12 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16824d153a5d9ce1], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.21/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16824d1547dfff18], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16824d1548e585b2], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16824d1552b5e1cf], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16824d14a9ef3630], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-13 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16824d1522945a81], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.30/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16824d152d636115], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16824d152e83c417], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16824d153aeb82ba], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16824d14a9f48ed4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-14 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16824d150081b81f], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.16/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16824d150af518f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16824d150c2b7ede], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16824d151a9ead8a], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16824d14aa1a5910], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-15 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16824d1519080b03], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.29/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16824d152811b418], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16824d1529868235], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16824d1530aedf9e], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16824d14aa4e8371], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-16 to v1.21-worker2] STEP: Considering event: Type = 
[Normal], Name = [overcommit-16.16824d1522443ceb], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.19/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16824d152d22fd46], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16824d152e5a8f27], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16824d153a9e9b53], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16824d14aa877d72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-17 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16824d153a5c2835], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.20/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16824d1547063b28], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16824d154884902a], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16824d1552ce05c6], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16824d14aabeadaf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-18 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16824d1549d107b6], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.22/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16824d1554694aac], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16824d155585c35b], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16824d155ec59254], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16824d14aaf08310], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-19 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16824d151a0f184e], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.18/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16824d1527951129], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16824d15299f2558], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16824d1530716fad], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16824d14a74e6040], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-2 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16824d14de00b4f0], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.25/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16824d14ed5a34ad], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering 
event: Type = [Normal], Name = [overcommit-2.16824d14eeb527b5], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16824d14f70bae94], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16824d14a792b190], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-3 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16824d14df98d618], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.24/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16824d14ed5a1ba8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16824d14eee3f2b0], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16824d14f75ad1d0], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16824d14a7d0b235], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-4 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16824d14fe488cf3], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.27/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16824d150adc8cc8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16824d150c2c602f], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16824d151a9aa496], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16824d14a81c5f97], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-5 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16824d1549c21b67], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.33/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16824d155443e249], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16824d15558578b4], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16824d155ec07f3e], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16824d14a862dc6c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-6 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16824d14ed4a1e50], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.15/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16824d14f8abbe0c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16824d14f9e4b301], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16824d150636bae2], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = 
[overcommit-7.16824d14a882dc28], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-7 to v1.21-worker2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16824d14ddfc1dc4], Reason = [AddedInterface], Message = [Add eth0 [10.244.2.14/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16824d14eaef9339], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16824d14ec3418fd], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16824d14f7112a03], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16824d14a8b82a23], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-8 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16824d151760c49a], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.28/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16824d1528261b40], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16824d1529a08d25], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16824d1530e508bc], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16824d14a8ea11cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6935/overcommit-9 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16824d153a643260], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.31/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16824d1546fde865], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16824d15487c33ac], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16824d15503a2c7b], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16824d1702f3f4a2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:52:58.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6935" for this suite. 
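------------------------------
Note on the ephemeral-storage saturation above: each worker reports 470632488960 bytes of allocatable local ephemeral storage, and the test requests one tenth of that (47063248896 bytes, the logged "pod capacity") per pod, so the 20 overcommit pods exactly fill the two workers and the extra pod fails with "Insufficient ephemeral-storage". A minimal Go sketch (not the e2e framework's own code; the helper name is hypothetical) of a pause pod making such a request:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overcommitPod builds a pause pod whose ephemeral-storage request is a fixed
// share of a node's allocatable bytes, mirroring the overcommit-N pods above.
func overcommitPod(name string, bytes int64) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						// 470632488960 / 10 = 47063248896 bytes per pod, so ten pods
						// saturate one worker and twenty saturate both.
						v1.ResourceEphemeralStorage: *resource.NewQuantity(bytes, resource.BinarySI),
					},
				},
			}},
		},
	}
}

func main() {
	p := overcommitPod("overcommit-0", 47063248896)
	q := p.Spec.Containers[0].Resources.Requests[v1.ResourceEphemeralStorage]
	fmt.Println(q.String())
}
------------------------------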
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.276 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":5,"skipped":1999,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:52:58.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:52:58.104: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:52:58.112: INFO: Waiting for terminating namespaces to be deleted... 
May 25 11:52:58.116: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:52:58.128: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container coredns ready: true, restart count 0 May 25 11:52:58.128: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container coredns ready: true, restart count 0 May 25 11:52:58.128: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container loopdev ready: true, restart count 0 May 25 11:52:58.128: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:52:58.128: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container kube-multus ready: true, restart count 0 May 25 11:52:58.128: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:52:58.128: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container setsysctls ready: true, restart count 0 May 25 11:52:58.128: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container speaker ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-1 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-1 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-10 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-10 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-13 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-13 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-15 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-15 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-2 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-2 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-3 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-3 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-4 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-4 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-5 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-5 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-8 from 
sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-8 ready: true, restart count 0 May 25 11:52:58.128: INFO: overcommit-9 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.128: INFO: Container overcommit-9 ready: true, restart count 0 May 25 11:52:58.128: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:52:58.138: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:52:58.138: INFO: Container loopdev ready: true, restart count 0 May 25 11:52:58.138: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:52:58.139: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container kube-multus ready: true, restart count 1 May 25 11:52:58.139: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:52:58.139: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container setsysctls ready: true, restart count 0 May 25 11:52:58.139: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:52:58.139: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:52:58.139: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container controller ready: true, restart count 0 May 25 11:52:58.139: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container speaker ready: true, restart count 0 May 25 11:52:58.139: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container contour ready: true, restart count 0 May 25 11:52:58.139: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container contour ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-0 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-0 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-11 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-11 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-12 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-12 ready: true, 
restart count 0 May 25 11:52:58.139: INFO: overcommit-14 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-14 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-16 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-16 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-17 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-17 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-18 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-18 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-19 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-19 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-6 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-6 ready: true, restart count 0 May 25 11:52:58.139: INFO: overcommit-7 from sched-pred-6935 started at 2021-05-25 11:52:46 +0000 UTC (1 container statuses recorded) May 25 11:52:58.139: INFO: Container overcommit-7 ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d19a4d4e5d3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d19e5f618ae], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d1a5daefaff], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2011/filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10 to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d1a7b2d7e73], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.35/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d1a87a93e1c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d1a88de074a], Reason = [Created], Message = [Created container filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10.16824d1a90bd0a31], Reason = [Started], Message = [Started container filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10] STEP: Considering event: Type = [Normal], Name = [without-label.16824d18b42e588e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2011/without-label to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [without-label.16824d18d757fde4], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.34/24]] STEP: Considering event: Type = [Normal], Name = [without-label.16824d18edf90189], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.16824d19366c53db], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16824d193f7c1285], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16824d19a3a6acea], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod6a807bfa-1ddf-4b3d-a7da-c7ad164ec559.16824d1b0bbd7a6d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:53:15.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2011" for this suite. 
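------------------------------
The pod-overhead check above registers a RuntimeClass and a fake extended resource (example.com/beardsecond, visible in the FailedScheduling messages), then verifies that a pod's RuntimeClass overhead is added to its container requests when the scheduler sizes it against node allocatable. A rough Go sketch of those two objects, with an illustrative handler and quantities since the real values are not in the log:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A RuntimeClass whose Overhead is charged to every pod that selects it.
	// Handler and quantities are illustrative, not taken from the test.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-sketch"},
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: v1.ResourceList{
				v1.ResourceName("example.com/beardsecond"): resource.MustParse("250"),
			},
		},
	}

	// A pod selecting that RuntimeClass: the scheduler counts the Overhead on top
	// of the container requests, so a filler pod plus its overhead can leave too
	// little example.com/beardsecond for the additional pod, as seen above.
	rcName := rc.Name
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod-sketch"},
		Spec: v1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceName("example.com/beardsecond"): resource.MustParse("1000"),
					},
				},
			}},
		},
	}
	fmt.Println(rc.Name, *pod.Spec.RuntimeClassName)
}
------------------------------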
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.328 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":6,"skipped":2299,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:53:15.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:53:15.433: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:53:15.442: INFO: Waiting for terminating namespaces to be deleted... 
May 25 11:53:15.445: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:53:15.454: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container coredns ready: true, restart count 0 May 25 11:53:15.454: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container coredns ready: true, restart count 0 May 25 11:53:15.454: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:15.454: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:15.454: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container kube-multus ready: true, restart count 0 May 25 11:53:15.454: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:15.454: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:15.454: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container speaker ready: true, restart count 0 May 25 11:53:15.454: INFO: filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10 from sched-pred-2011 started at 2021-05-25 11:53:11 +0000 UTC (1 container statuses recorded) May 25 11:53:15.454: INFO: Container filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10 ready: true, restart count 0 May 25 11:53:15.454: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:53:15.462: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:15.462: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:15.462: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container kube-multus ready: true, restart count 1 May 25 11:53:15.462: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:15.462: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:15.462: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:53:15.462: INFO: 
kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:53:15.462: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container controller ready: true, restart count 0 May 25 11:53:15.462: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container speaker ready: true, restart count 0 May 25 11:53:15.462: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container contour ready: true, restart count 0 May 25 11:53:15.462: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:53:15.462: INFO: Container contour ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7faaaa3e-47c3-4817-881b-3f12e3c98933=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-7d8d2b43-87db-4814-a136-2af3a128aad8 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-7d8d2b43-87db-4814-a136-2af3a128aad8 off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-7d8d2b43-87db-4814-a136-2af3a128aad8 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7faaaa3e-47c3-4817-881b-3f12e3c98933=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:53:19.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3300" for this suite. 
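------------------------------
In the matching case above, the found node is tainted with <random key>=testing-taint-value:NoSchedule and labelled, and the pod is relaunched with a toleration (plus a selector for the new label) so only that tainted node can accept it. A minimal sketch of such a pod spec, using illustrative key names in place of the random kubernetes.io/e2e-taint-key-... and e2e-label-key-... names generated by the test:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative keys; the e2e run generates random ones, as logged above.
	taintKey := "example.com/e2e-taint-key"
	labelKey := "example.com/e2e-label-key"

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: v1.PodSpec{
			// Pin the pod to the labelled (and tainted) node...
			NodeSelector: map[string]string{labelKey: "testing-label-value"},
			// ...and tolerate exactly the NoSchedule taint that was applied to it.
			Tolerations: []v1.Toleration{{
				Key:      taintKey,
				Operator: v1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   v1.TaintEffectNoSchedule,
			}},
			Containers: []v1.Container{{
				Name:  "with-tolerations",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Spec.Tolerations[0].Key)
}
------------------------------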
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":7,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:53:19.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:53:19.618: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:53:19.625: INFO: Waiting for terminating namespaces to be deleted... May 25 11:53:19.629: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:53:19.637: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container coredns ready: true, restart count 0 May 25 11:53:19.638: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container coredns ready: true, restart count 0 May 25 11:53:19.638: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:19.638: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:19.638: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container kube-multus ready: true, restart count 0 May 25 11:53:19.638: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:19.638: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:19.638: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container speaker ready: true, restart count 0 May 25 11:53:19.638: INFO: filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10 from sched-pred-2011 started at 2021-05-25 11:53:11 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container filler-pod-8d85f6c1-9aac-48dd-a950-6b1f065e3f10 ready: true, restart count 0 May 25 11:53:19.638: INFO: with-tolerations from sched-pred-3300 started at 
2021-05-25 11:53:17 +0000 UTC (1 container statuses recorded) May 25 11:53:19.638: INFO: Container with-tolerations ready: true, restart count 0 May 25 11:53:19.638: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:53:19.646: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:19.646: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:19.646: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container kube-multus ready: true, restart count 1 May 25 11:53:19.646: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:19.646: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:19.646: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:53:19.646: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:53:19.646: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container controller ready: true, restart count 0 May 25 11:53:19.646: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container speaker ready: true, restart count 0 May 25 11:53:19.646: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container contour ready: true, restart count 0 May 25 11:53:19.646: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:53:19.646: INFO: Container contour ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-089f827f-8f26-480c-ab94-84ba4cc31e7a 42 STEP: Trying to relaunch the pod, now with labels. 
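------------------------------
For the relaunch "now with labels" above, the pod carries a required node-affinity term matching the label just applied to the found node (value 42 in this run). A sketch of that kind of spec, with an illustrative label key standing in for the random kubernetes.io/e2e-... key:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative key; the e2e test applies a random kubernetes.io/e2e-<uuid>
	// label with value "42" and then requires it here.
	labelKey := "example.com/e2e-node-label"

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					// Required terms are hard filters: the pod is only schedulable
					// onto nodes whose labels satisfy at least one term.
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"42"},
							}},
						}},
					},
				},
			},
			Containers: []v1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Spec.Affinity.NodeAffinity != nil)
}
------------------------------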
STEP: removing the label kubernetes.io/e2e-089f827f-8f26-480c-ab94-84ba4cc31e7a off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-089f827f-8f26-480c-ab94-84ba4cc31e7a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:53:23.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-716" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":8,"skipped":2411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:53:23.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:53:23.767: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:53:23.775: INFO: Waiting for terminating namespaces to be deleted... 
May 25 11:53:23.778: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:53:23.787: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container coredns ready: true, restart count 0 May 25 11:53:23.787: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container coredns ready: true, restart count 0 May 25 11:53:23.787: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:23.787: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:23.787: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container kube-multus ready: true, restart count 0 May 25 11:53:23.787: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:23.787: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:23.787: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container speaker ready: true, restart count 0 May 25 11:53:23.787: INFO: with-tolerations from sched-pred-3300 started at 2021-05-25 11:53:17 +0000 UTC (1 container statuses recorded) May 25 11:53:23.787: INFO: Container with-tolerations ready: true, restart count 0 May 25 11:53:23.787: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:53:23.797: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:23.797: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:23.797: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container kube-multus ready: true, restart count 1 May 25 11:53:23.797: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:23.797: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:23.797: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:53:23.797: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 
2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:53:23.797: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container controller ready: true, restart count 0 May 25 11:53:23.797: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container speaker ready: true, restart count 0 May 25 11:53:23.797: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container contour ready: true, restart count 0 May 25 11:53:23.797: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container contour ready: true, restart count 0 May 25 11:53:23.797: INFO: with-labels from sched-pred-716 started at 2021-05-25 11:53:21 +0000 UTC (1 container statuses recorded) May 25 11:53:23.797: INFO: Container with-labels ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d1ce5ab7-52dd-4114-9e11-27b1776690c0=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-c6f5bcf7-e8d4-4535-9ddf-8ddff60f9fc4 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d3ec55a12], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1153/without-toleration to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d5c2254b0], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.38/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d68e12078], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d6a08430e], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d71b58209], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1db6e86a25], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16824d1db8f62f77], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d1ce5ab7-52dd-4114-9e11-27b1776690c0: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16824d1db8f62f77], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d1ce5ab7-52dd-4114-9e11-27b1776690c0: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d3ec55a12], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1153/without-toleration to v1.21-worker] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d5c2254b0], Reason = [AddedInterface], Message = [Add eth0 [10.244.1.38/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d68e12078], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.4.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d6a08430e], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1d71b58209], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16824d1db6e86a25], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d1ce5ab7-52dd-4114-9e11-27b1776690c0=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16824d1e17df6e67], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1153/still-no-tolerations to v1.21-worker] STEP: removing the label kubernetes.io/e2e-label-key-c6f5bcf7-e8d4-4535-9ddf-8ddff60f9fc4 off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-c6f5bcf7-e8d4-4535-9ddf-8ddff60f9fc4 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d1ce5ab7-52dd-4114-9e11-27b1776690c0=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:53:27.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1153" for this suite. 
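------------------------------
The non-matching case above shows the other side: once the node is tainted, the relaunched pod with no toleration is rejected with the "0/3 nodes are available ..." FailedScheduling events until the taint is removed. A sketch of applying such a NoSchedule taint with client-go, assuming the kubeconfig path used in this run and an illustrative taint key (this is not the e2e framework's own helper):

package main

import (
	"context"
	"fmt"
	"log"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported at the start of this run; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	node, err := cs.CoreV1().Nodes().Get(ctx, "v1.21-worker", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Add a NoSchedule taint; any pod without a matching toleration now fails
	// scheduling onto this node, as in the FailedScheduling events above.
	node.Spec.Taints = append(node.Spec.Taints, v1.Taint{
		Key:    "example.com/e2e-taint-key", // illustrative, not the random e2e key
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	})
	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("tainted", node.Name)
}
------------------------------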
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":9,"skipped":2614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:53:27.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:53:27.972: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:53:27.981: INFO: Waiting for terminating namespaces to be deleted... 
May 25 11:53:27.984: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:53:27.993: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container coredns ready: true, restart count 0 May 25 11:53:27.993: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container coredns ready: true, restart count 0 May 25 11:53:27.993: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:27.993: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:27.993: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container kube-multus ready: true, restart count 0 May 25 11:53:27.993: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:27.993: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:27.993: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container speaker ready: true, restart count 0 May 25 11:53:27.993: INFO: still-no-tolerations from sched-pred-1153 started at 2021-05-25 11:53:27 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container still-no-tolerations ready: false, restart count 0 May 25 11:53:27.993: INFO: with-tolerations from sched-pred-3300 started at 2021-05-25 11:53:17 +0000 UTC (1 container statuses recorded) May 25 11:53:27.993: INFO: Container with-tolerations ready: false, restart count 0 May 25 11:53:27.993: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:53:28.001: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container loopdev ready: true, restart count 0 May 25 11:53:28.001: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:53:28.001: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container kube-multus ready: true, restart count 1 May 25 11:53:28.001: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:53:28.001: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container setsysctls ready: true, restart count 0 May 25 11:53:28.001: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 
container statuses recorded) May 25 11:53:28.001: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:53:28.001: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:53:28.001: INFO: controller-675995489c-x7gj2 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container controller ready: true, restart count 0 May 25 11:53:28.001: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container speaker ready: true, restart count 0 May 25 11:53:28.001: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container contour ready: true, restart count 0 May 25 11:53:28.001: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container contour ready: true, restart count 0 May 25 11:53:28.001: INFO: with-labels from sched-pred-716 started at 2021-05-25 11:53:21 +0000 UTC (1 container statuses recorded) May 25 11:53:28.001: INFO: Container with-labels ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node v1.21-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node v1.21-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:53:36.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-647" for this suite. 
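Annotation: the PodTopologySpread spec above labels two nodes with the dedicated topology key kubernetes.io/e2e-pts-filter and then checks that 4 pods with MaxSkew=1 split 2-and-2 across them. A minimal sketch of that kind of constraint follows (types from k8s.io/api/core/v1 and k8s.io/apimachinery; the pod label foo=bar is hypothetical).

// Sketch: a topologySpreadConstraint of the shape exercised by the spec above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	c := corev1.TopologySpreadConstraint{
		MaxSkew:           1,                              // at most 1 pod of imbalance between topology domains
		TopologyKey:       "kubernetes.io/e2e-pts-filter", // the label applied to both worker nodes in the log
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"foo": "bar"}, // hypothetical label shared by the 4 test pods
		},
	}
	fmt.Printf("spec.topologySpreadConstraints: %+v\n", c)
}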
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.185 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":10,"skipped":3816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:179 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:53:36.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 25 11:53:36.165: INFO: Waiting up to 1m0s for all nodes to be ready May 25 11:54:36.281: INFO: Waiting for terminating namespaces to be deleted... May 25 11:54:36.285: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 11:54:36.302: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 11:54:36.302: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 25 11:54:36.317: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:54:36.317: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:54:36.317: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:54:36.317: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:54:36.317: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:179 STEP: Trying to launch a pod with a label to get a node which can launch it. 
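Annotation: the ComputeCPUMemFraction lines above reduce to requested/allocatable, with CPU in millicores and memory in bytes. A quick arithmetic check of the reported values:

// Arithmetic check of the fractions logged above.
package main

import "fmt"

func main() {
	cpuFraction := 100.0 / 88000.0             // totalRequestedCPUResource / cpuAllocatableMil
	memFraction := 104857600.0 / 67430219776.0 // totalRequestedMemResource / memAllocatableVal
	fmt.Println(cpuFraction) // ~0.0011363636363636363, as logged
	fmt.Println(memFraction) // ~0.001555053510849171, as logged
}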
STEP: Verifying the node has a label kubernetes.io/hostname May 25 11:54:42.405: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:54:42.406: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:54:42.406: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:42.406: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:54:42.406: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:54:42.411: INFO: Waiting for running... May 25 11:54:47.469: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
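Annotation: before scoring specs run, the framework pads each node with "balanced" filler pods so that both nodes sit at the same requested/allocatable ratio. The sketch below is an interpretation, not the framework's literal code: the filler request is roughly target-ratio x allocatable minus what is already requested. Assuming a 0.5 target ratio, this reproduces the 43900-millicore filler pods that appear later in this log (memory differs slightly, so only CPU is shown).

// Rough sketch: deriving a filler pod's CPU request to balance a node.
package main

import "fmt"

// fillerRequest returns how much more must be requested on a node to reach
// the target requested/allocatable ratio; never negative.
func fillerRequest(requested, allocatable, targetRatio float64) float64 {
	need := targetRatio*allocatable - requested
	if need < 0 {
		return 0
	}
	return need
}

func main() {
	// Numbers from the log above: 100 millicores requested of 88000 allocatable.
	fmt.Println(fillerRequest(100, 88000, 0.5)) // 43900, matching the filler pods logged further down
}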
May 25 11:54:52.791: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:54:52.791: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 25 11:54:52.791: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 25 11:54:52.791: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:54:52.791: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:55:07.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3133" for this suite. 
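Annotation: the spec above first launches pod-with-label-security-s1 and then schedules a second pod whose required podAntiAffinity must steer it to the other node. A sketch of that shape follows; the label key/value (security=S1) are inferred from the pod name and may not match the test exactly.

// Sketch: a required podAntiAffinity term of the kind exercised above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	aff := corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"security": "S1"}, // inferred from pod-with-label-security-s1
				},
				TopologyKey: "kubernetes.io/hostname", // the node label verified in the log above
			}},
		},
	}
	fmt.Printf("spec.affinity: %+v\n", aff)
}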
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:90.897 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:179 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":11,"skipped":4282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:263 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:55:07.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:154 May 25 11:55:07.066: INFO: Waiting up to 1m0s for all nodes to be ready May 25 11:56:07.181: INFO: Waiting for terminating namespaces to be deleted... May 25 11:56:07.185: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 11:56:07.199: INFO: 21 / 21 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 11:56:07.199: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
May 25 11:56:07.214: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:56:07.214: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:56:07.214: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.214: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:56:07.214: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:263 May 25 11:56:07.229: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Node: v1.21-worker, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:56:07.229: INFO: Node: v1.21-worker, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, 
memFraction: 0.001555053510849171 May 25 11:56:07.229: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Pod for on the node: envoy-lg6jb, Cpu: 200, Mem: 419430400 May 25 11:56:07.229: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 May 25 11:56:07.229: INFO: Node: v1.21-worker2, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 May 25 11:56:07.239: INFO: Waiting for running... May 25 11:56:12.299: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 25 11:56:17.368: INFO: ComputeCPUMemFraction for node: v1.21-worker May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Node: v1.21-worker, totalRequestedCPUResource: 395200, cpuAllocatableMil: 88000, cpuFraction: 1 May 25 11:56:17.368: INFO: Node: v1.21-worker, totalRequestedMemResource: 302710374400, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 25 11:56:17.368: INFO: ComputeCPUMemFraction for node: v1.21-worker2 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.368: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.369: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.369: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.369: INFO: Pod for on the node: 51ccb963-3bda-47fa-bacf-5e5cf3d07e8e-0, Cpu: 43900, Mem: 33622835200 May 25 11:56:17.369: INFO: Node: v1.21-worker2, totalRequestedCPUResource: 526900, cpuAllocatableMil: 88000, cpuFraction: 1 May 25 11:56:17.369: INFO: Node: v1.21-worker2, totalRequestedMemResource: 403578880000, memAllocatableVal: 67430219776, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-241 to 1 STEP: Verify the pods should not scheduled to the node: v1.21-worker STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-241, will wait for the garbage collector to delete the pods May 25 11:56:23.556: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.036788ms May 25 11:56:23.657: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.829321ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:56:45.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-241" for this suite. 
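Annotation: the spec above applies an avoidPods annotation to the first node and then verifies that the ReplicationController's pod is not scheduled there. A sketch of how such an annotation can be built is below (the AvoidPods/PreferAvoidPodsEntry types and the annotation-key constant come from k8s.io/api/core/v1; the owner reference and reason here are hypothetical stand-ins for scheduler-priority-avoid-pod).

// Sketch: building the preferAvoidPods node annotation value.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "0000-placeholder", // hypothetical UID
					Controller: &isController,
				},
			},
			Reason: "e2e test", // hypothetical reason
		}},
	}
	value, _ := json.Marshal(avoid)
	// The JSON value is stored on the node under this annotation key.
	fmt.Printf("%s=%s\n", corev1.PreferAvoidPodsAnnotationKey, value)
}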
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:151 • [SLOW TEST:98.453 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:263 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":12,"skipped":4656,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 25 11:56:45.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 25 11:56:45.529: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 11:56:45.537: INFO: Waiting for terminating namespaces to be deleted... 
May 25 11:56:45.540: INFO: Logging pods the apiserver thinks is on node v1.21-worker before test May 25 11:56:45.549: INFO: coredns-558bd4d5db-hdfz5 from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container coredns ready: true, restart count 0 May 25 11:56:45.549: INFO: coredns-558bd4d5db-k2mkk from kube-system started at 2021-05-25 11:20:08 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container coredns ready: true, restart count 0 May 25 11:56:45.549: INFO: create-loop-devs-mtgxk from kube-system started at 2021-05-25 11:05:05 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container loopdev ready: true, restart count 0 May 25 11:56:45.549: INFO: kindnet-64qsq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:56:45.549: INFO: kube-multus-ds-p7tvf from kube-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container kube-multus ready: true, restart count 0 May 25 11:56:45.549: INFO: kube-proxy-pjm2c from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:56:45.549: INFO: tune-sysctls-f6hsg from kube-system started at 2021-05-25 11:04:35 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container setsysctls ready: true, restart count 0 May 25 11:56:45.549: INFO: speaker-thr6r from metallb-system started at 2021-05-25 11:04:45 +0000 UTC (1 container statuses recorded) May 25 11:56:45.549: INFO: Container speaker ready: true, restart count 0 May 25 11:56:45.549: INFO: Logging pods the apiserver thinks is on node v1.21-worker2 before test May 25 11:56:45.557: INFO: create-loop-devs-lfj6m from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container loopdev ready: true, restart count 0 May 25 11:56:45.557: INFO: kindnet-5xbgn from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container kindnet-cni ready: true, restart count 0 May 25 11:56:45.557: INFO: kube-multus-ds-chmxd from kube-system started at 2021-05-24 17:25:29 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container kube-multus ready: true, restart count 1 May 25 11:56:45.557: INFO: kube-proxy-wg4wq from kube-system started at 2021-05-24 17:24:25 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container kube-proxy ready: true, restart count 0 May 25 11:56:45.557: INFO: tune-sysctls-b7rgm from kube-system started at 2021-05-24 17:25:28 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container setsysctls ready: true, restart count 0 May 25 11:56:45.557: INFO: dashboard-metrics-scraper-856586f554-l66m5 from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container dashboard-metrics-scraper ready: true, restart count 0 May 25 11:56:45.557: INFO: kubernetes-dashboard-78c79f97b4-k777m from kubernetes-dashboard started at 2021-05-24 17:25:32 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container kubernetes-dashboard ready: true, restart count 0 May 25 11:56:45.557: INFO: controller-675995489c-x7gj2 from metallb-system 
started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container controller ready: true, restart count 0 May 25 11:56:45.557: INFO: speaker-lw6f6 from metallb-system started at 2021-05-24 17:25:30 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container speaker ready: true, restart count 0 May 25 11:56:45.557: INFO: contour-74948c9879-n2262 from projectcontour started at 2021-05-24 17:25:31 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container contour ready: true, restart count 0 May 25 11:56:45.557: INFO: contour-74948c9879-w22pr from projectcontour started at 2021-05-24 19:58:28 +0000 UTC (1 container statuses recorded) May 25 11:56:45.557: INFO: Container contour ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16824d4c397b4971], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 25 11:56:46.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9730" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":13,"skipped":5119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 25 11:56:46.612: INFO: Running AfterSuite actions on all nodes May 25 11:56:46.612: INFO: Running AfterSuite actions on node 1 May 25 11:56:46.612: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5758,"failed":0} Ran 13 of 5771 Specs in 511.133 seconds SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5758 Skipped PASS Ginkgo ran 1 suite in 8m32.680591235s Test Suite Passed
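
Closing annotation: the final spec above (NodeAffinity not matching) provokes a FailedScheduling event by asking for a node label that no node carries. A minimal sketch of that pod shape is below; the label key and values are hypothetical, chosen only to illustrate a required term no node satisfies.

// Sketch: a required node affinity term that matches no node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	aff := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-az-name", // hypothetical label absent from all nodes
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"e2e-az1", "e2e-az2"},
					}},
				}},
			},
		},
	}
	fmt.Printf("spec.affinity.nodeAffinity: %+v\n", aff.NodeAffinity)
}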