I1023 04:30:43.630353 21 e2e.go:129] Starting e2e run "9f021672-d9d1-4daa-bfdb-6a059f84e86f" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634963442 - Will randomize all specs
Will run 13 of 5770 specs

Oct 23 04:30:43.645: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:30:43.650: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 04:30:43.680: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 04:30:43.746: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 04:30:43.746: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 04:30:43.746: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 04:30:43.746: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 04:30:43.746: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 04:30:43.762: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 04:30:43.762: INFO: e2e test version: v1.21.5
Oct 23 04:30:43.763: INFO: kube-apiserver version: v1.21.1
Oct 23 04:30:43.763: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:30:43.770: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:30:43.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W1023 04:30:43.805186 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 04:30:43.805: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 04:30:43.808: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:30:43.810: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:30:43.818: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 04:30:43.820: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:30:43.831: INFO: startup-script from conntrack-5350 started at 2021-10-23 04:29:10 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container startup-script ready: true, restart count 0 Oct 23 04:30:43.831: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:30:43.831: INFO: Container discover ready: false, restart count 0 Oct 23 04:30:43.831: INFO: Container init ready: false, restart count 0 Oct 23 04:30:43.831: INFO: Container install ready: false, restart count 0 Oct 23 04:30:43.831: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:30:43.831: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:30:43.831: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:30:43.831: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:30:43.831: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:30:43.831: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:30:43.831: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:30:43.831: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:30:43.831: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:30:43.831: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:30:43.831: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:30:43.831: INFO: Container collectd ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:30:43.831: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:30:43.831: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC 
(4 container statuses recorded) Oct 23 04:30:43.831: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container grafana ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:30:43.831: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:30:43.831: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:30:43.831: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:30:43.831: INFO: up-down-2-dhj65 from services-6579 started at 2021-10-23 04:28:41 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container up-down-2 ready: false, restart count 0 Oct 23 04:30:43.831: INFO: up-down-2-mp5nt from services-6579 started at 2021-10-23 04:28:41 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container up-down-2 ready: false, restart count 0 Oct 23 04:30:43.831: INFO: up-down-2-v2fsk from services-6579 started at 2021-10-23 04:28:41 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container up-down-2 ready: false, restart count 0 Oct 23 04:30:43.831: INFO: up-down-3-9l57p from services-6579 started at 2021-10-23 04:29:54 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.831: INFO: Container up-down-3 ready: false, restart count 0 Oct 23 04:30:43.831: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:30:43.839: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:30:43.839: INFO: Container discover ready: false, restart count 0 Oct 23 04:30:43.839: INFO: Container init ready: false, restart count 0 Oct 23 04:30:43.839: INFO: Container install ready: false, restart count 0 Oct 23 04:30:43.839: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:30:43.839: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:30:43.839: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:30:43.839: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:30:43.839: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:30:43.839: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:30:43.839: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:30:43.839: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:30:43.839: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container nfd-worker 
ready: true, restart count 0 Oct 23 04:30:43.839: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:30:43.839: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:30:43.839: INFO: Container collectd ready: true, restart count 0 Oct 23 04:30:43.839: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:30:43.839: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:30:43.839: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:30:43.839: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:30:43.839: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:30:43.839: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:30:43.839: INFO: up-down-3-7n2vs from services-6579 started at 2021-10-23 04:29:52 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container up-down-3 ready: false, restart count 0 Oct 23 04:30:43.839: INFO: up-down-3-r6f62 from services-6579 started at 2021-10-23 04:29:52 +0000 UTC (1 container statuses recorded) Oct 23 04:30:43.839: INFO: Container up-down-3 ready: false, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e97ed2f9341], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e98350dd136], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e98ac9e81bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3204/filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6 to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e9901234494], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e9912de8af3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 297.479018ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e991abe48dc], Reason = [Created], Message = [Created container filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6]
STEP: Considering event: Type = [Normal], Name = [filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6.16b08e99283afd95], Reason = [Started], Message = [Started container filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6]
STEP: Considering event: Type = [Normal], Name = [without-label.16b08e96fcb3daa9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3204/without-label to node1]
STEP: Considering event: Type = [Normal], Name = [without-label.16b08e976469f1e9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-label.16b08e97779dc6f2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 322.157374ms]
STEP: Considering event: Type = [Normal], Name = [without-label.16b08e977e5196fd], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16b08e97852a6521], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16b08e97ebf7da4a], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [additional-pod597cb33f-0dbf-41f7-a831-cd9cd5a06f3b.16b08e99541908e3], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:30:54.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3204" for this suite.
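The FailedScheduling events above are the expected outcome: the scheduler adds the RuntimeClass overhead to the additional pod's own requests, so the remaining example.com/beardsecond capacity on the two worker nodes is insufficient. A minimal sketch of how an Overhead.PodFixed value attaches to a RuntimeClass and is referenced from a pod, using the upstream k8s.io/api types; the handler name, quantities, and resources here are illustrative assumptions, not the test's fixtures.

// Illustrative sketch only; not the e2e test's code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-handler"}, // assumed name
		Handler:    "runc",
		// PodFixed is charged on top of the pod's own requests when the
		// scheduler fits the pod onto a node.
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}

	rcName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-overhead"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
			}},
		},
	}
	// Effective request seen by the scheduler: container requests + overhead,
	// i.e. 750m CPU and 120Mi memory for this example.
	fmt.Println(pod.Name, "is scheduled as if it requested 750m CPU and 120Mi memory")
}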
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":1,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:30:54.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:30:54.999: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:30:55.008: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 04:30:55.010: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:30:55.020: INFO: startup-script from conntrack-5350 started at 2021-10-23 04:29:10 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container startup-script ready: false, restart count 0 Oct 23 04:30:55.020: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:30:55.020: INFO: Container discover ready: false, restart count 0 Oct 23 04:30:55.020: INFO: Container init ready: false, restart count 0 Oct 23 04:30:55.020: INFO: Container install ready: false, restart count 0 Oct 23 04:30:55.020: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:30:55.020: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:30:55.020: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:30:55.020: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:30:55.020: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:30:55.020: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:30:55.020: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:30:55.020: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:30:55.020: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:30:55.020: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:30:55.020: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:30:55.020: INFO: Container collectd ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:30:55.020: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:30:55.020: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC 
(4 container statuses recorded) Oct 23 04:30:55.020: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container grafana ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:30:55.020: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:30:55.020: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:30:55.020: INFO: filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6 from sched-pred-3204 started at 2021-10-23 04:30:51 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.020: INFO: Container filler-pod-69fcf1a6-2c52-461b-99ea-39d5ad75a0a6 ready: true, restart count 0 Oct 23 04:30:55.020: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:30:55.031: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:30:55.031: INFO: Container discover ready: false, restart count 0 Oct 23 04:30:55.031: INFO: Container init ready: false, restart count 0 Oct 23 04:30:55.031: INFO: Container install ready: false, restart count 0 Oct 23 04:30:55.031: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:30:55.031: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:30:55.031: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:30:55.031: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:30:55.031: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:30:55.031: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:30:55.031: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:30:55.031: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:30:55.031: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:30:55.031: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:30:55.031: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:30:55.031: INFO: Container collectd ready: true, restart count 0 Oct 23 04:30:55.031: INFO: Container collectd-exporter ready: true, restart count 0 Oct 
23 04:30:55.031: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:30:55.031: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:30:55.031: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:30:55.031: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:30:55.031: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:30:55.031: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-88db94f9-2d2b-4f7a-a009-7575046849fd testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9996ad7c40], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8276/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e99f09296f8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a00c75873], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 271.887672ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a07ade822], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a0e748f9d], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a865b70dd], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b08e9a882c0443], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b08e9a882c0443], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9996ad7c40], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8276/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e99f09296f8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a00c75873], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 271.887672ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a07ade822], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a0e748f9d], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b08e9a865b70dd], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b08e9ac525069e], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b08e9b3caaaddc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8276/still-no-tolerations to node2]
STEP: removing the label kubernetes.io/e2e-label-key-88db94f9-2d2b-4f7a-a009-7575046849fd off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-88db94f9-2d2b-4f7a-a009-7575046849fd
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:31:03.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8276" for this suite.
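The event sequence above shows the mechanism under test: while the random NoSchedule taint sits on node2, the still-no-tolerations pod (also pinned to node2 by the random label) cannot be placed, and it schedules only after the taint is removed. A minimal sketch of the matching taint and the toleration the pod would have needed, using the real corev1 types and the key/value logged above; everything else is illustrative, not the test's code.

// Illustrative sketch only; not the e2e test's code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-af9cd672-0ff0-4d5b-a4d8-84242379741f",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Without this toleration the scheduler reports
	// "1 node(s) had taint {...}, that the pod didn't tolerate".
	tol := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Println("toleration matches taint:", tol.ToleratesTaint(&taint))
}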
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.184 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":2,"skipped":2375,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:31:03.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:31:03.184: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:31:03.199: INFO: Waiting for terminating namespaces to be deleted... Oct 23 04:31:03.202: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:31:03.212: INFO: startup-script from conntrack-5350 started at 2021-10-23 04:29:10 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container startup-script ready: false, restart count 0 Oct 23 04:31:03.212: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:31:03.212: INFO: Container discover ready: false, restart count 0 Oct 23 04:31:03.212: INFO: Container init ready: false, restart count 0 Oct 23 04:31:03.212: INFO: Container install ready: false, restart count 0 Oct 23 04:31:03.212: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:31:03.212: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:31:03.212: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:31:03.212: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:31:03.212: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:31:03.212: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: 
INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:31:03.212: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:31:03.212: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:31:03.212: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:31:03.212: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:31:03.212: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:31:03.212: INFO: Container collectd ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:31:03.212: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:31:03.212: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 04:31:03.212: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container grafana ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:31:03.212: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:31:03.212: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:31:03.212: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:31:03.219: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:31:03.219: INFO: Container discover ready: false, restart count 0 Oct 23 04:31:03.219: INFO: Container init ready: false, restart count 0 Oct 23 04:31:03.219: INFO: Container install ready: false, restart count 0 Oct 23 04:31:03.219: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:31:03.219: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:31:03.219: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:31:03.219: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:31:03.219: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 
23 04:31:03.219: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:31:03.219: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:31:03.219: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:31:03.219: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:31:03.219: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:31:03.219: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:31:03.219: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:31:03.219: INFO: Container collectd ready: true, restart count 0 Oct 23 04:31:03.219: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:31:03.219: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:31:03.219: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:31:03.219: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:31:03.219: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:31:03.219: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:31:03.219: INFO: still-no-tolerations from sched-pred-8276 started at 2021-10-23 04:31:02 +0000 UTC (1 container statuses recorded) Oct 23 04:31:03.219: INFO: Container still-no-tolerations ready: false, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. 
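The spec that follows launches 4 replicas constrained by the dedicated kubernetes.io/e2e-pts-filter topology key applied above; with MaxSkew=1 and a hard (DoNotSchedule) constraint over 2 nodes, the only feasible placement is 2 pods per node. A minimal sketch of such a constraint using the corev1 types; the pod label selector and container are illustrative assumptions, not the test's fixtures.

// Illustrative sketch only; not the e2e test's code.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func evenSpreadConstraint() corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1, // pod counts per topology value may differ by at most 1
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule, // acts as a filter, not a preference
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // assumed label
		},
	}
}

func main() {
	spec := corev1.PodSpec{
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{evenSpreadConstraint()},
		Containers:                []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
	}
	_ = spec
}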
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:31:17.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6677" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.192 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":3,"skipped":2377,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:31:17.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:31:17.389: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:31:17.397: INFO: Waiting for terminating namespaces to be deleted... Oct 23 04:31:17.400: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:31:17.410: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:31:17.410: INFO: Container discover ready: false, restart count 0 Oct 23 04:31:17.410: INFO: Container init ready: false, restart count 0 Oct 23 04:31:17.410: INFO: Container install ready: false, restart count 0 Oct 23 04:31:17.410: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:31:17.410: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:31:17.410: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:31:17.410: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:31:17.411: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:31:17.411: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:31:17.411: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:31:17.411: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:31:17.411: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:31:17.411: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:31:17.411: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:31:17.411: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:31:17.411: INFO: Container collectd ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:31:17.411: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:31:17.411: INFO: prometheus-k8s-0 
from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 04:31:17.411: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container grafana ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:31:17.411: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:31:17.411: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:31:17.411: INFO: rs-e2e-pts-filter-j7xtr from sched-pred-6677 started at 2021-10-23 04:31:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 04:31:17.411: INFO: rs-e2e-pts-filter-kcc5n from sched-pred-6677 started at 2021-10-23 04:31:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.411: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 04:31:17.411: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:31:17.427: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:31:17.427: INFO: Container discover ready: false, restart count 0 Oct 23 04:31:17.427: INFO: Container init ready: false, restart count 0 Oct 23 04:31:17.427: INFO: Container install ready: false, restart count 0 Oct 23 04:31:17.427: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:31:17.427: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:31:17.427: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:31:17.427: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:31:17.427: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:31:17.427: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:31:17.427: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:31:17.427: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:31:17.427: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:31:17.427: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:31:17.427: INFO: collectd-xhdgw from monitoring started 
at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:31:17.427: INFO: Container collectd ready: true, restart count 0 Oct 23 04:31:17.427: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:31:17.427: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:31:17.427: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:31:17.427: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:31:17.427: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:31:17.427: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:31:17.427: INFO: rs-e2e-pts-filter-c6n7n from sched-pred-6677 started at 2021-10-23 04:31:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 04:31:17.427: INFO: rs-e2e-pts-filter-s77b5 from sched-pred-6677 started at 2021-10-23 04:31:11 +0000 UTC (1 container statuses recorded) Oct 23 04:31:17.427: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b08e9ecebd4054], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:31:18.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3981" for this suite. 
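The single FailedScheduling event above is the pass condition for this spec: restricted-pod carries a node selector that no node satisfies, so the 2 workers fail the affinity/selector check and the 3 masters are excluded by their taint. A minimal sketch of a pod with an unsatisfiable NodeSelector; the label key/value is an illustrative assumption rather than the test's generated one.

// Illustrative sketch only; not the e2e test's code.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the scheduler reports
			// "node(s) didn't match Pod's node affinity/selector".
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
	_ = pod
}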
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":4,"skipped":3336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:31:18.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 04:31:18.500: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 04:32:18.552: INFO: Waiting for terminating namespaces to be deleted... Oct 23 04:32:18.554: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 04:32:18.571: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 04:32:18.571: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 04:32:18.571: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 04:32:18.571: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 23 04:32:18.588: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:32:18.588: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:32:18.588: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.588: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:32:18.588: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Oct 23 04:32:18.601: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:32:18.601: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:32:18.601: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:32:18.601: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:32:18.601: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 04:32:18.617: INFO: Waiting for running... Oct 23 04:32:18.617: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 04:32:23.687: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 04:32:23.687: INFO: Node: node1, totalRequestedMemResource: 1340355450880, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
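The cpuFraction/memFraction values logged above are plain ratios of summed pod requests to node allocatable, capped at 1. The following sketch reproduces the arithmetic from the logged numbers; it is not the e2e ComputeCPUMemFraction helper itself, just the same calculation done by hand.

```go
// Reproducing the logged fractions: requests summed per node, divided by
// allocatable, and capped at 1 (which is why node1 reads 1 after balancing).
package main

import "fmt"

func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1 // the framework never reports more than 1
	}
	return f
}

func main() {
	// Before the balancing pods (node1): one 100m CPU / 100Mi memory request.
	fmt.Println(fraction(100, 77000))              // 0.0012987012987012987
	fmt.Println(fraction(104857600, 178884632576)) // 0.0005861744437742619
	// After the balancing pods (node1): requests now exceed allocatable.
	fmt.Println(fraction(576100, 77000))               // 1
	fmt.Println(fraction(1340355450880, 178884632576)) // 1
}
```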
Oct 23 04:32:23.687: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Pod for on the node: fce9c527-986f-4aea-aa23-dbac5e534d4b-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:32:23.687: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 04:32:23.687: INFO: Node: node2, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884628480, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
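The taints applied in the next step all use the soft PreferNoSchedule effect, so they influence scoring rather than filtering. Below is a sketch of one such taint and the matching toleration the test pod needs; the key and value are illustrative stand-ins for the random UUID-based ones shown in the log.

```go
// Sketch of the taint/toleration pair this spec exercises. A pod carrying one
// matching toleration per taint on the first node is the one the test expects
// to be preferably scheduled onto that node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example", // hypothetical key
		Value:  "testing-taint-value-example",                     // hypothetical value
		Effect: corev1.TaintEffectPreferNoSchedule,                // soft: affects scoring only
	}

	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("taint:      %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration: %s=%s:%s\n", toleration.Key, toleration.Value, toleration.Effect)
}
```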
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-526094fc-f73f-48d6-a34a=testing-taint-value-24559c76-b88c-477c-b76a-75418dae5bd0:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-20ccabe0-1ed2-49b3-adc8=testing-taint-value-9fb00dc6-bc03-497c-a3a0-cf66dc52679e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-49edb55e-32ad-4aea-b1a7=testing-taint-value-ac609819-2e2c-4cd3-bb23-d5c1fac17db3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ad6063fe-d05a-48ea-b5b7=testing-taint-value-48d80ea7-3f58-48bd-bbea-7ff2cb0c44df:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-51005e51-abce-43ec-aea8=testing-taint-value-4964babc-475d-4e07-8ef1-34993364008d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-97cc7339-e4ee-4e89-bb70=testing-taint-value-eb8e2e41-e5dc-4b93-a45d-873c19feb294:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b57abe95-e9bd-44f3-a1e5=testing-taint-value-9e93a1d7-e345-4e4f-9071-26d8440513ed:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9ad7e593-ddf4-4aba-84a5=testing-taint-value-a7714902-424c-4333-923e-60cfce2d4db9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-29d2d082-1273-4877-825b=testing-taint-value-058bd1e8-2147-4e21-b33b-a5c641a61514:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cca078d7-bfb5-4a4a-a27a=testing-taint-value-8a7723a3-0708-4cd3-9f6e-d5bc58bc1866:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-15d25704-2135-4441-a938=testing-taint-value-abda5fc4-4ddb-4a1c-ab21-cd32e19e2798:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c72683e9-2656-412f-9b2e=testing-taint-value-83836962-ca7e-4a62-9b22-ccb204e7db01:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c16fb7af-2897-4e5e-aff2=testing-taint-value-f4f07e69-6735-4c4a-bbef-dd485ab1372a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b6357bb1-b3db-403c-8010=testing-taint-value-fe244c99-f6da-4360-866b-e5df463228e9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-490835cf-4f32-4ef8-a741=testing-taint-value-685d0abf-add7-413f-81c5-172903c7499b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f1822bf7-76c3-4e75-8c0d=testing-taint-value-cbcde02e-e4e0-48f6-982c-b128b4e44762:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-970f2908-1f29-4510-b38f=testing-taint-value-cb1c9751-ce09-4d7e-9858-240db87cf7c4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-658d65ed-6335-42d2-88db=testing-taint-value-891626ff-b095-4852-88b9-25f20c6ea280:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b711afae-f42a-4ef7-912d=testing-taint-value-a8a4f3cc-f257-49e6-b976-d3f54c259a2d:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-dec58b73-9415-4c0d-8561=testing-taint-value-cbc7e240-6a3b-4e72-878b-ec3c4b014aad:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-15d25704-2135-4441-a938=testing-taint-value-abda5fc4-4ddb-4a1c-ab21-cd32e19e2798:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c72683e9-2656-412f-9b2e=testing-taint-value-83836962-ca7e-4a62-9b22-ccb204e7db01:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c16fb7af-2897-4e5e-aff2=testing-taint-value-f4f07e69-6735-4c4a-bbef-dd485ab1372a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b6357bb1-b3db-403c-8010=testing-taint-value-fe244c99-f6da-4360-866b-e5df463228e9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-490835cf-4f32-4ef8-a741=testing-taint-value-685d0abf-add7-413f-81c5-172903c7499b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f1822bf7-76c3-4e75-8c0d=testing-taint-value-cbcde02e-e4e0-48f6-982c-b128b4e44762:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-970f2908-1f29-4510-b38f=testing-taint-value-cb1c9751-ce09-4d7e-9858-240db87cf7c4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-658d65ed-6335-42d2-88db=testing-taint-value-891626ff-b095-4852-88b9-25f20c6ea280:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b711afae-f42a-4ef7-912d=testing-taint-value-a8a4f3cc-f257-49e6-b976-d3f54c259a2d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dec58b73-9415-4c0d-8561=testing-taint-value-cbc7e240-6a3b-4e72-878b-ec3c4b014aad:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-526094fc-f73f-48d6-a34a=testing-taint-value-24559c76-b88c-477c-b76a-75418dae5bd0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-20ccabe0-1ed2-49b3-adc8=testing-taint-value-9fb00dc6-bc03-497c-a3a0-cf66dc52679e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-49edb55e-32ad-4aea-b1a7=testing-taint-value-ac609819-2e2c-4cd3-bb23-d5c1fac17db3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ad6063fe-d05a-48ea-b5b7=testing-taint-value-48d80ea7-3f58-48bd-bbea-7ff2cb0c44df:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-51005e51-abce-43ec-aea8=testing-taint-value-4964babc-475d-4e07-8ef1-34993364008d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-97cc7339-e4ee-4e89-bb70=testing-taint-value-eb8e2e41-e5dc-4b93-a45d-873c19feb294:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b57abe95-e9bd-44f3-a1e5=testing-taint-value-9e93a1d7-e345-4e4f-9071-26d8440513ed:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-9ad7e593-ddf4-4aba-84a5=testing-taint-value-a7714902-424c-4333-923e-60cfce2d4db9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-29d2d082-1273-4877-825b=testing-taint-value-058bd1e8-2147-4e21-b33b-a5c641a61514:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cca078d7-bfb5-4a4a-a27a=testing-taint-value-8a7723a3-0708-4cd3-9f6e-d5bc58bc1866:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:32:35.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3971" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:76.565 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":5,"skipped":3686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:32:35.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 04:32:35.069: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 04:33:35.122: INFO: Waiting for terminating namespaces to be deleted... Oct 23 04:33:35.125: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 04:33:35.145: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 04:33:35.145: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 04:33:35.145: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 04:33:35.145: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 23 04:33:35.165: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:33:35.165: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:33:35.165: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:35.165: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:33:35.165: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Oct 23 04:33:43.259: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:33:43.259: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 04:33:43.259: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: 
INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:33:43.259: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:33:43.259: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:33:43.270: INFO: Waiting for running... Oct 23 04:33:43.274: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 04:33:48.342: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 04:33:48.342: INFO: Node: node2, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
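"Create balanced pods" pads each node with a pause pod sized so that requested/allocatable reaches roughly the same fraction everywhere before the scoring behaviour is checked. The sketch below shows that sizing arithmetic under an assumed 0.5 target fraction; both the target and the helper are assumptions for illustration, not the e2e code itself.

```go
// Rough sketch of the balancing-pod sizing: top each node up to
// target*allocatable worth of requests.
package main

import "fmt"

// paddingRequest returns how much extra request is needed to move a node from
// its current requested total up to target*allocatable.
func paddingRequest(target, allocatable, requested float64) float64 {
	p := target*allocatable - requested
	if p < 0 {
		return 0
	}
	return p
}

func main() {
	// node2 in the log: 77000 mCPU allocatable, 100 mCPU already requested.
	fmt.Println(paddingRequest(0.5, 77000, 100)) // 38400, matching the logged balancing pod CPU
	// Memory works the same way; the logged 89350039552-byte request is in the
	// same ballpark as 0.5*178884628480 - 104857600.
	fmt.Println(paddingRequest(0.5, 178884628480, 104857600))
}
```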
Oct 23 04:33:48.342: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Pod for on the node: 1b8d06f7-0983-496c-b703-c394ed43583d-0, Cpu: 38400, Mem: 89350039552 Oct 23 04:33:48.342: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 04:33:48.342: INFO: Node: node1, totalRequestedMemResource: 1340355450880, memAllocatableVal: 178884632576, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:34:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5260" for this suite. 
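With four matching replicas pinned to node2 and none on node1, a soft topology-spread constraint over the dedicated kubernetes.io/e2e-pts-score key makes node1 the more even, higher-scoring placement, which is why the test-pod is expected to land there. A sketch of that constraint follows; the label names are hypothetical, and the real manifest lives in priorities.go.

```go
// Sketch of a soft (ScheduleAnyway) topology-spread constraint over the
// dedicated e2e topology key used by this spec.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-score",
			WhenUnsatisfiable: corev1.ScheduleAnyway, // scoring only, never a hard filter
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "e2e-pts-score"}, // hypothetical label
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```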
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:89.381 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":6,"skipped":3824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:34:04.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 04:34:04.455: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 04:35:04.505: INFO: Waiting for terminating namespaces to be deleted... Oct 23 04:35:04.508: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 04:35:04.525: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 04:35:04.525: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 04:35:04.525: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 04:35:04.525: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 23 04:35:04.542: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:35:04.542: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:35:04.542: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:35:04.542: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:35:04.542: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Oct 23 04:35:08.588: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.588: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:35:08.588: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:35:08.588: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for 
on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:08.589: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:35:08.589: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 04:35:08.600: INFO: Waiting for running... Oct 23 04:35:08.604: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 04:35:13.674: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:35:13.674: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Oct 23 04:35:13.674: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 04:35:13.674: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:35:13.674: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:35:35.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3077" for this suite. 
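The two pods in this spec are pod-with-label-security-s1 (labelled security=S1) and pod-with-pod-antiaffinity, which declares anti-affinity against that label over kubernetes.io/hostname and is therefore expected on the other node. A sketch of that anti-affinity term follows; the log does not show whether the required or preferred flavour is used, so the required form is shown here as an assumption.

```go
// Sketch of a pod anti-affinity term that repels the pod from any node
// already running a pod labelled security=S1.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	out, _ := json.MarshalIndent(antiAffinity, "", "  ")
	fmt.Println(string(out))
}
```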
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:91.300 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":7,"skipped":4040,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:35:35.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:35:35.759: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:35:35.768: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 04:35:35.770: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:35:35.782: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:35:35.782: INFO: Container discover ready: false, restart count 0 Oct 23 04:35:35.782: INFO: Container init ready: false, restart count 0 Oct 23 04:35:35.782: INFO: Container install ready: false, restart count 0 Oct 23 04:35:35.782: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:35:35.782: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:35:35.782: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:35:35.782: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:35:35.782: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:35:35.782: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:35:35.782: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:35:35.782: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:35:35.782: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:35:35.782: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:35:35.782: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:35:35.782: INFO: Container collectd ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:35:35.782: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:35:35.782: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 04:35:35.782: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
04:35:35.782: INFO: Container grafana ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:35:35.782: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:35:35.782: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:35:35.782: INFO: pod-with-pod-antiaffinity from sched-priority-3077 started at 2021-10-23 04:35:13 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.782: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Oct 23 04:35:35.782: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:35:35.810: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:35:35.810: INFO: Container discover ready: false, restart count 0 Oct 23 04:35:35.810: INFO: Container init ready: false, restart count 0 Oct 23 04:35:35.810: INFO: Container install ready: false, restart count 0 Oct 23 04:35:35.810: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:35:35.810: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:35:35.810: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:35:35.810: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:35:35.810: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:35:35.810: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:35:35.810: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:35:35.810: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:35:35.810: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:35:35.810: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:35:35.810: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:35:35.810: INFO: Container collectd ready: true, restart count 0 Oct 23 04:35:35.810: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:35:35.810: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:35:35.810: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:35:35.810: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 Oct 23 04:35:35.810: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:35:35.810: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:35:35.810: INFO: pod-with-label-security-s1 from sched-priority-3077 started at 2021-10-23 04:35:04 +0000 UTC (1 container statuses recorded) Oct 23 04:35:35.810: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-291976e0-52df-45d8-b8b6-8bac52904bc1 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-291976e0-52df-45d8-b8b6-8bac52904bc1 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-291976e0-52df-45d8-b8b6-8bac52904bc1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:35:43.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5859" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.152 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":8,"skipped":4462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:35:43.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:35:43.914: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:35:43.923: INFO: Waiting for terminating namespaces to be deleted... 
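The NodeAffinity spec above works by applying the random label kubernetes.io/e2e-291976e0-52df-45d8-b8b6-8bac52904bc1=42 to node2 and then relaunching the pod with a hard node-affinity requirement on that label, so only node2 is feasible. A sketch of such a pod spec follows; the In operator, the pod/container names, and the pause image are assumptions for illustration.

package sketches

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithRequiredNodeAffinity returns a pod that can only be scheduled onto a node
// carrying labelKey=labelValue, mirroring the "relaunch the pod, now with labels" step.
func podWithRequiredNodeAffinity(labelKey, labelValue string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					// Hard requirement: nodes without the label are filtered out.
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{labelValue},
							}},
						}},
					},
				},
			},
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
}

For the values in this run, podWithRequiredNodeAffinity("kubernetes.io/e2e-291976e0-52df-45d8-b8b6-8bac52904bc1", "42") would target the label the spec applied to node2 before removing it again in teardown.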
Oct 23 04:35:43.925: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:35:43.935: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:35:43.935: INFO: Container discover ready: false, restart count 0 Oct 23 04:35:43.935: INFO: Container init ready: false, restart count 0 Oct 23 04:35:43.935: INFO: Container install ready: false, restart count 0 Oct 23 04:35:43.935: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:35:43.935: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:35:43.935: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:35:43.935: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:35:43.935: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:35:43.935: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:35:43.935: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:35:43.935: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:35:43.935: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:35:43.935: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:35:43.935: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:35:43.935: INFO: Container collectd ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:35:43.935: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:35:43.935: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 04:35:43.935: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
04:35:43.935: INFO: Container grafana ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:35:43.935: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:35:43.935: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:35:43.935: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:35:43.935: INFO: pod-with-pod-antiaffinity from sched-priority-3077 started at 2021-10-23 04:35:13 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.935: INFO: Container pod-with-pod-antiaffinity ready: false, restart count 0 Oct 23 04:35:43.935: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:35:43.944: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:35:43.944: INFO: Container discover ready: false, restart count 0 Oct 23 04:35:43.944: INFO: Container init ready: false, restart count 0 Oct 23 04:35:43.944: INFO: Container install ready: false, restart count 0 Oct 23 04:35:43.944: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:35:43.944: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:35:43.944: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:35:43.944: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:35:43.944: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:35:43.944: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:35:43.944: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:35:43.944: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:35:43.944: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:35:43.944: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:35:43.944: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:35:43.944: INFO: Container collectd ready: true, restart count 0 Oct 23 04:35:43.944: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:35:43.944: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:35:43.944: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:35:43.944: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 Oct 23 04:35:43.944: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:35:43.944: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:35:43.944: INFO: with-labels from sched-pred-5859 started at 2021-10-23 04:35:39 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container with-labels ready: true, restart count 0 Oct 23 04:35:43.944: INFO: pod-with-label-security-s1 from sched-priority-3077 started at 2021-10-23 04:35:04 +0000 UTC (1 container statuses recorded) Oct 23 04:35:43.944: INFO: Container pod-with-label-security-s1 ready: false, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Oct 23 04:35:43.986: INFO: Pod cmk-kn29k requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod cmk-t9r2t requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod cmk-webhook-6c9d5f8578-pkwhc requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod kube-flannel-2cdvd requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod kube-flannel-xx6ls requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod kube-multus-ds-amd64-fww5b requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod kube-multus-ds-amd64-l97s4 requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod kube-proxy-5h2bl requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod kube-proxy-m9z8s requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod kubernetes-dashboard-785dcbb76d-kc4kh requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod kubernetes-metrics-scraper-5558854cb-dfn2n requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod node-feature-discovery-worker-2pvq5 requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod node-feature-discovery-worker-8k8m5 requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod collectd-n9sbv requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod collectd-xhdgw requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod node-exporter-fjc79 requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod node-exporter-v656r requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod prometheus-operator-585ccfb458-hwjk2 requesting local 
ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-gltgg requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod with-labels requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod pod-with-label-security-s1 requesting local ephemeral resource =0 on Node node2 Oct 23 04:35:43.986: INFO: Pod pod-with-pod-antiaffinity requesting local ephemeral resource =0 on Node node1 Oct 23 04:35:43.986: INFO: Using pod capacity: 40542413347 Oct 23 04:35:43.986: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 Oct 23 04:35:43.986: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Oct 23 04:35:44.172: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b08edcddc6364a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b08ede67d0a0a5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b08ede914f3062], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 696.149915ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b08edea4defb03], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b08edee7f2b306], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b08edcde55ea1d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b08ede530fd929], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b08ede99c33ee8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.186153328s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b08edec173b970], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b08edf029b30a2], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b08edce31e64ae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-10 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b08edef5788d90], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b08edf6394abf2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.847328688s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b08edf6a75ce5a], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b08edf7204df46], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b08edce3b51832], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = 
[overcommit-11.16b08edf92b49a6e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b08edfc3183086], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 811.82216ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b08edfca0177a8], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b08edfd18a4eb7], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b08edce44441ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b08edec5255cf8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b08edf3e30e727], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 2.030790761s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b08edf595ea3ba], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b08edf61f57252], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b08edce4d573bc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b08edf55f117b0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b08edfbad1478d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.692407088s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b08edfc13dc0ad], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b08edfd5ad0485], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b08edce56d9840], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b08edf92d7e199], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b08edfed3e4920], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.51665622s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b08edff3c297d9], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b08edffbbcb897], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b08edce5ed6817], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b08edde5f9710c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b08eddfd1eeee4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 388.326319ms] STEP: Considering event: Type = [Normal], 
Name = [overcommit-15.16b08ede1c4fd7d8], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b08ede4cb8b4b7], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b08edce67d453a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-16 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b08edf92d27b11], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b08edfd8351feb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.164087441s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b08edfdedd7256], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b08edfe53d0d78], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b08edce7042aa1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b08edebd5f221f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b08edef8303249], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 986.771ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b08edf2f92fd94], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b08edf5ace7dac], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b08edce787f80e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b08edf55e1ad55], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b08edf8d8ab5a8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 933.818648ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b08edf9408b9d6], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b08edf9ac6c750], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b08edce8138467], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b08ede52f73a7f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b08ede82436b18], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 793.512566ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b08edea3cbc6b2], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b08edeffaf1d97], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = 
[overcommit-2.16b08edcded608c6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b08edf556a6503], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b08edf7829ff45], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 582.974808ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b08edf7ef7637e], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b08edf8646264d], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b08edcdf60b06f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b08edf924156f9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b08edfaa0af9fa], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 399.084104ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b08edfb19a3faf], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b08edfb97e38b7], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b08edcdfefe153], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b08edf35ffa285], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b08edf4fa7e5f9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 430.451407ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b08edf8da6858c], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b08edfa2bdc8ff], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b08edce0834b20], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b08ede61747d4d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b08ede7bc1e1d7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 441.271084ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b08ede8f9e6b33], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b08eded803ae9a], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b08edce114e71d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b08edf50d37716], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = 
[overcommit-6.16b08edf6df96ae9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 489.019842ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b08edf9456ed1b], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b08edfa4115c51], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b08edce1896a12], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b08edd5fa997e9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b08edd7ffcdf6b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 542.316614ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b08edda7f156ba], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b08ede7ce6d8c0], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b08edce2186e77], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b08edef2d73fc3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b08edf07159b70], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 339.617266ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b08edf35f50994], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b08edfa14556aa], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b08edce29a79b9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8880/overcommit-9 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b08edf55f051ba], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b08edfa408df04], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.310224225s] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b08edfaaf51033], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b08edfb41584c3], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b08ee06a42fde2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:36:00.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8880" for this suite. 
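The numbers in this spec line up as follows: each worker node reports 405424133473 bytes of allocatable local ephemeral storage, each overcommit pod is sized at one tenth of that (the logged "pod capacity" of 40542413347 bytes), so the 20 pods exactly saturate the two schedulable nodes and the additional pod is rejected with "Insufficient ephemeral-storage". A sketch of a pod that pins down local ephemeral storage this way is below; the pause image and the identical request/limit layout are assumptions for illustration, not lifted from the suite.

package sketches

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overcommitPod returns a pause pod that requests and limits a fixed amount of
// local ephemeral storage, so a fixed number of copies saturates a node's allocatable.
func overcommitPod(name string, ephemeralBytes int64) *v1.Pod {
	qty := *resource.NewQuantity(ephemeralBytes, resource.BinarySI)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: qty},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: qty},
				},
			}},
		},
	}
}

overcommitPod("overcommit-0", 40542413347) would mirror the per-pod capacity logged above; ten such pods fill one node, and a twenty-first pod anywhere in the cluster cannot fit.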
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.380 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":9,"skipped":4532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:36:00.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:36:00.296: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:36:00.304: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 04:36:00.306: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:36:00.317: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:36:00.317: INFO: Container discover ready: false, restart count 0 Oct 23 04:36:00.318: INFO: Container init ready: false, restart count 0 Oct 23 04:36:00.318: INFO: Container install ready: false, restart count 0 Oct 23 04:36:00.318: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:36:00.318: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:36:00.318: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:36:00.318: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:36:00.318: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:36:00.318: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:36:00.318: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:36:00.318: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:36:00.318: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:36:00.318: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:36:00.318: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:36:00.318: INFO: Container collectd ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:36:00.318: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:36:00.318: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 04:36:00.318: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
04:36:00.318: INFO: Container grafana ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:36:00.318: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:36:00.318: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-1 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-1 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-10 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-10 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-12 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-12 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-13 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-13 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-15 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-15 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-17 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-17 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-18 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-18 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-19 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-19 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-2 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-2 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: overcommit-9 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.318: INFO: Container overcommit-9 ready: true, restart count 0 Oct 23 04:36:00.318: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:36:00.326: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:36:00.326: INFO: Container discover ready: false, restart count 0 Oct 23 04:36:00.326: INFO: Container init ready: false, restart count 0 Oct 23 04:36:00.326: INFO: Container install ready: false, restart count 0 Oct 23 04:36:00.326: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:36:00.326: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:36:00.326: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:36:00.326: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container cmk-webhook ready: 
true, restart count 0 Oct 23 04:36:00.326: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:36:00.326: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:36:00.326: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:36:00.326: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:36:00.326: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:36:00.326: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:36:00.326: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:36:00.326: INFO: Container collectd ready: true, restart count 0 Oct 23 04:36:00.326: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:36:00.326: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:36:00.326: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:36:00.326: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:36:00.326: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:36:00.326: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-0 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-0 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-11 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-11 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-14 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-14 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-16 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-16 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-3 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-3 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-4 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-4 ready: true, restart count 0 Oct 
23 04:36:00.326: INFO: overcommit-5 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-5 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-6 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-6 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-7 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-7 ready: true, restart count 0 Oct 23 04:36:00.326: INFO: overcommit-8 from sched-pred-8880 started at 2021-10-23 04:35:44 +0000 UTC (1 container statuses recorded) Oct 23 04:36:00.326: INFO: Container overcommit-8 ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-85688f85-76e7-4307-ae6f-b798639494e0=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-2956c873-c98c-490e-b50f-6a866547a17d testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-2956c873-c98c-490e-b50f-6a866547a17d off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-2956c873-c98c-490e-b50f-6a866547a17d STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-85688f85-76e7-4307-ae6f-b798639494e0=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:36:10.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4505" for this suite. 
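The taints-tolerations spec above taints node2 with kubernetes.io/e2e-taint-key-85688f85-76e7-4307-ae6f-b798639494e0=testing-taint-value:NoSchedule and then relaunches the pod with a matching toleration, so the scheduler still admits it to that node. A sketch of such a toleration follows; the Equal operator, pod name, and pause image are assumptions for illustration.

package sketches

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podToleratingTaint returns a pod that tolerates taintKey=taintValue:NoSchedule,
// so it can land on a freshly tainted node while untolerating pods are kept off it.
func podToleratingTaint(taintKey, taintValue string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: v1.PodSpec{
			Tolerations: []v1.Toleration{{
				Key:      taintKey,
				Operator: v1.TolerationOpEqual,
				Value:    taintValue,
				Effect:   v1.TaintEffectNoSchedule,
			}},
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
}

Calling podToleratingTaint with the taint key and value shown above reproduces the "now with tolerations" relaunch step; the suite then removes both the label and the taint in teardown.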
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.164 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":10,"skipped":4710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:36:10.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 04:36:10.464: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 04:36:10.472: INFO: Waiting for terminating namespaces to be deleted... 
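The spec declared above checks that pods may share a hostPort as long as the (hostIP, hostPort, protocol) tuple differs; its STEP lines further down create three pods on one node, all on port 54321: TCP on 127.0.0.1, TCP on 10.10.190.207, and UDP on 10.10.190.207. A sketch of the port wiring is below; the container-side port 8080, the pause image, and the kubernetes.io/hostname nodeSelector used to pin every pod to one node are assumptions for illustration (the suite itself pins pods via a random node label).

package sketches

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod returns a pod binding hostPort on a specific hostIP and protocol.
// Two pods may share the same hostPort as long as hostIP or protocol differs.
func hostPortPod(name, nodeName, hostIP string, hostPort int32, proto v1.Protocol) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			// Keep all pods on the same node so the hostPort tuples are actually compared.
			NodeSelector: map[string]string{"kubernetes.io/hostname": nodeName},
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Ports: []v1.ContainerPort{{
					ContainerPort: 8080, // assumed container-side port
					HostPort:      hostPort,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

With the values from this run, hostPortPod("pod1", "node1", "127.0.0.1", 54321, v1.ProtocolTCP), the same call with "10.10.190.207", and a third with "10.10.190.207" and v1.ProtocolUDP yield three pods that all schedule without a port conflict, which is what the spec asserts.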
Oct 23 04:36:10.474: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 04:36:10.483: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 04:36:10.483: INFO: Container discover ready: false, restart count 0 Oct 23 04:36:10.483: INFO: Container init ready: false, restart count 0 Oct 23 04:36:10.483: INFO: Container install ready: false, restart count 0 Oct 23 04:36:10.483: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:36:10.483: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:36:10.483: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:36:10.483: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.483: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 04:36:10.483: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.483: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:36:10.483: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.484: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:36:10.484: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.484: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 04:36:10.484: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.484: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 04:36:10.484: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.484: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:36:10.484: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.484: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:36:10.484: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.484: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:36:10.484: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:36:10.484: INFO: Container collectd ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:36:10.484: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:36:10.484: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:36:10.484: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 04:36:10.484: INFO: Container config-reloader ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
04:36:10.484: INFO: Container grafana ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Container prometheus ready: true, restart count 1 Oct 23 04:36:10.484: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 04:36:10.484: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 04:36:10.484: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 04:36:10.500: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 04:36:10.500: INFO: Container discover ready: false, restart count 0 Oct 23 04:36:10.500: INFO: Container init ready: false, restart count 0 Oct 23 04:36:10.500: INFO: Container install ready: false, restart count 0 Oct 23 04:36:10.500: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 04:36:10.500: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:36:10.500: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:36:10.500: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:36:10.500: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:36:10.500: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:36:10.500: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:36:10.500: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:36:10.500: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:36:10.500: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:36:10.500: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 04:36:10.500: INFO: Container collectd ready: true, restart count 0 Oct 23 04:36:10.500: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:36:10.500: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 04:36:10.500: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 04:36:10.500: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:36:10.500: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:36:10.500: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 
+0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:36:10.500: INFO: with-tolerations from sched-pred-4505 started at 2021-10-23 04:36:04 +0000 UTC (1 container statuses recorded) Oct 23 04:36:10.500: INFO: Container with-tolerations ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f3994523-44b2-44fd-936f-c3b7656159bc 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-f3994523-44b2-44fd-936f-c3b7656159bc off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-f3994523-44b2-44fd-936f-c3b7656159bc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:36:28.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3108" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":11,"skipped":5161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:36:28.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 04:36:28.647: INFO: Waiting up to 1m0s for all nodes to be 
ready Oct 23 04:37:28.700: INFO: Waiting for terminating namespaces to be deleted... Oct 23 04:37:28.702: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 04:37:28.723: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 04:37:28.723: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 04:37:28.723: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 04:37:28.723: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 23 04:37:28.738: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:37:28.738: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:37:28.738: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.738: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:37:28.738: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Oct 23 04:37:28.755: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:37:28.755: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 04:37:28.755: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.755: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 04:37:28.756: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 04:37:28.756: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 04:37:28.771: INFO: Waiting for running... Oct 23 04:37:28.773: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 04:37:33.841: INFO: ComputeCPUMemFraction for node: node1 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 
04:37:33.841: INFO: Node: node1, totalRequestedMemResource: 1340355481600, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 04:37:33.841: INFO: ComputeCPUMemFraction for node: node2 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.841: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.842: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.842: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.842: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.842: INFO: Pod for on the node: f142921f-dfcd-481c-a94e-d1a7666d6108-0, Cpu: 38400, Mem: 89350041600 Oct 23 04:37:33.842: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 04:37:33.842: INFO: Node: node2, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884628480, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-809 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-809, will wait for the garbage collector to delete the pods Oct 23 04:37:40.027: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.124137ms Oct 23 04:37:40.127: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.610064ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:37:58.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-809" for this suite. 
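
[Editor's note] The "avoidPod annotation" exercised above is the scheduler.alpha.kubernetes.io/preferAvoidPods node annotation: the spec first creates "balanced" padding pods so that requested/allocatable is roughly equal on every node (the ComputeCPUMemFraction lines; e.g. 100 mCPU / 77000 mCPU ≈ 0.0013 before padding, capped at 1 afterwards), then annotates node1 so that pods owned by the scheduler-priority-avoid-pod ReplicationController are scored away from it. A minimal Go sketch of building and applying such an annotation follows; the package name, clientset handle, node/RC arguments, and reason string are illustrative assumptions, not values taken from this run.

// Hypothetical sketch: build the preferAvoidPods annotation that tells the
// scheduler to prefer not to place pods owned by a given ReplicationController
// on a node. Mirrors the mechanism used by the SchedulerPriorities avoidPod spec.
package schedsketch

import (
	"context"
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// applyAvoidPodsAnnotation patches nodeName with the preferAvoidPods annotation
// so that pods controlled by rcName are scored away from that node.
// cs, nodeName and rcName are illustrative parameters, not taken from the log.
func applyAvoidPodsAnnotation(cs kubernetes.Interface, nodeName, rcName string) error {
	controller := true
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       rcName,
					Controller: &controller,
				},
			},
			Reason: "e2e priority test", // assumed reason string
		}},
	}
	raw, err := json.Marshal(avoid)
	if err != nil {
		return err
	}
	// The annotation value is the JSON-serialized AvoidPods structure.
	patch := []byte(fmt.Sprintf(`{"metadata":{"annotations":{%q:%q}}}`,
		v1.PreferAvoidPodsAnnotationKey, string(raw)))
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), nodeName,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

Because the annotation is a scheduling preference rather than a hard filter, the spec balances CPU/memory usage on both nodes first so that the avoid-pods score is the deciding factor when it verifies the pod does not land on node1.
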
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:89.634 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":12,"skipped":5249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:37:58.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 04:37:58.294: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 04:38:58.345: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
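
[Editor's note] In the PodTopologySpread preemption spec above, the two nodes are labelled with the dedicated topology key kubernetes.io/e2e-pts-preemption and given 10 units of a fake extended resource each; the "high" and three "low" pods occupy 9/10 on both nodes, and the "medium" pod's hard topology spread constraint then forces the scheduler to preempt a low-priority pod rather than the high-priority one. A rough Go sketch of the kind of spec such a medium pod uses follows; the fake resource name, image, priority-class name, labels, and quantities are assumptions for illustration, not taken from the test source.

// Hypothetical sketch of a "medium"-priority pod whose topology spread
// constraint can only be satisfied by preempting lower-priority pods that
// hold the fake extended resource on one of the two labelled nodes.
package schedsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mediumPod builds the illustrative pod; ns and topologyKey (the log uses
// kubernetes.io/e2e-pts-preemption) are supplied by the caller.
func mediumPod(ns, topologyKey string) *v1.Pod {
	fakeRes := v1.ResourceName("example.com/fakePTSRes") // assumed resource name
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "medium",
			Namespace: ns,
			Labels:    map[string]string{"e2e-pts-preemption": "medium"},
		},
		Spec: v1.PodSpec{
			PriorityClassName: "medium-priority", // assumed PriorityClass, created beforehand
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{fakeRes: resource.MustParse("4")},
					Limits:   v1.ResourceList{fakeRes: resource.MustParse("4")},
				},
			}},
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       topologyKey,
				WhenUnsatisfiable: v1.DoNotSchedule, // hard constraint: skew cannot be tolerated
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
				},
			}},
		},
	}
}

With WhenUnsatisfiable set to DoNotSchedule the constraint is a filter, not a preference, so the scheduler must free capacity through preemption instead of accepting the skew; preempting a "low" pod is cheaper than preempting "high", which matches the pods left running in the log ("high", "low-1", "medium").
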
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:39:44.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-812" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:106.398 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":13,"skipped":5621,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 23 04:39:44.661: INFO: Running AfterSuite actions on all nodes Oct 23 04:39:44.661: INFO: Running AfterSuite actions on node 1 Oct 23 04:39:44.661: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0} Ran 13 of 5770 Specs in 541.021 seconds SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped PASS Ginkgo ran 1 suite in 9m2.267275198s Test Suite Passed
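
[Editor's note] The hostPort spec logged earlier in this run ("validates that there is no conflict between pods with same hostPort but different hostIP and protocol") relies on hostPort conflicts being keyed on the full (hostIP, hostPort, protocol) triple rather than on the port number alone. A small Go sketch of three port declarations that can coexist on one node, mirroring pod1/pod2/pod3 and the IPs shown in the log, is below; the package and function names are illustrative.

// Hypothetical sketch: three pods can all bind hostPort 54321 on the same node
// because their (hostIP, hostPort, protocol) triples differ, which is the
// scenario the hostPort predicates spec in this run verifies.
package schedsketch

import v1 "k8s.io/api/core/v1"

// nonConflictingPorts returns one port declaration per pod; the IPs mirror
// those in the log (loopback for pod1, the node IP 10.10.190.207 for pod2/pod3).
func nonConflictingPorts() [][]v1.ContainerPort {
	return [][]v1.ContainerPort{
		{{HostPort: 54321, ContainerPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP}},     // pod1
		{{HostPort: 54321, ContainerPort: 54321, HostIP: "10.10.190.207", Protocol: v1.ProtocolTCP}}, // pod2: same port, different hostIP
		{{HostPort: 54321, ContainerPort: 54321, HostIP: "10.10.190.207", Protocol: v1.ProtocolUDP}}, // pod3: same port and hostIP, different protocol
	}
}
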