I0429 23:50:47.575571 22 e2e.go:129] Starting e2e run "d6dcb9df-c364-4e37-a745-bf0f3e863ead" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651276246 - Will randomize all specs
Will run 13 of 5773 specs

Apr 29 23:50:47.590: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:50:47.595: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 29 23:50:47.624: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 23:50:47.687: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting
Apr 29 23:50:47.687: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting
Apr 29 23:50:47.687: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 23:50:47.687: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 29 23:50:47.687: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 29 23:50:47.698: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 29 23:50:47.698: INFO: e2e test version: v1.21.9
Apr 29 23:50:47.699: INFO: kube-apiserver version: v1.21.1
Apr 29 23:50:47.699: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:50:47.703: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run
  verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:50:47.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W0429 23:50:47.726334 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:50:47.726: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:50:47.729: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 23:50:47.732: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 23:50:47.740: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 23:50:47.742: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 29 23:50:47.753: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:50:47.754: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:50:47.754: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container discover ready: false, restart count 0
Apr 29 23:50:47.754: INFO: Container init ready: false, restart count 0
Apr 29 23:50:47.754: INFO: Container install ready: false, restart count 0
Apr 29 23:50:47.754: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 23:50:47.754: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:50:47.754: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:50:47.754: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 29 23:50:47.754: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 23:50:47.754: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.754: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:50:47.755: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.755: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:50:47.755: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.755: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:50:47.755: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:50:47.755: INFO: Container collectd ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:50:47.755: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:50:47.755: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Container node-exporter ready: true, restart count 0
Apr 29 23:50:47.755: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded)
Apr 29 23:50:47.755: INFO: Container config-reloader ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Container grafana ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Container prometheus ready: true, restart count 1
Apr 29 23:50:47.755: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.755: INFO: Container tas-extender ready: true, restart count 0
Apr 29 23:50:47.755: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 29 23:50:47.764: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:50:47.764: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:50:47.764: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container discover ready: false, restart count 0
Apr 29 23:50:47.764: INFO: Container init ready: false, restart count 0
Apr 29 23:50:47.764: INFO: Container install ready: false, restart count 0
Apr 29 23:50:47.764: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container cmk-webhook ready: true, restart count 0
Apr 29 23:50:47.764: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container kube-flannel ready: true, restart count 3
Apr 29 23:50:47.764: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:50:47.764: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:50:47.764: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:50:47.764: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:50:47.764: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:50:47.764: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container collectd ready: true, restart count 0
Apr 29 23:50:47.764: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:50:47.764: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:50:47.764: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:50:47.764: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:50:47.764: INFO: Container node-exporter ready: true, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea8300907e3ebb], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Warning], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea8300deddc071], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea83015674e12f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-28/filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090 to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea8301e1139cce], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea8301f833a67a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 387.969428ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea8301fee269cd], Reason = [Created], Message = [Created container filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090]
STEP: Considering event: Type = [Normal], Name = [filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090.16ea83020587d1d4], Reason = [Started], Message = [Started container filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090]
STEP: Considering event: Type = [Normal], Name = [without-label.16ea82ffa013e525], Reason = [Scheduled], Message = [Successfully assigned sched-pred-28/without-label to node2]
STEP: Considering event: Type = [Normal], Name = [without-label.16ea82fff5003977], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-label.16ea83000a8a35a8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 361.356477ms]
STEP: Considering event: Type = [Normal], Name = [without-label.16ea8300111b80ac], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16ea830017abc4a9], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16ea83008fab534e], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [additional-pod4f6216e3-b6ec-40c3-81b9-343d7a19ac94.16ea83026ebf6ff1], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:51:00.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-28" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:13.185 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
    verify pod overhead is accounted for
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":1,"skipped":151,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
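The spec above passes because the scheduler charges the RuntimeClass overhead against the node on top of the pod's own requests, which is what makes the "additional" pod unschedulable. A minimal sketch of such a declaration, assuming the stock k8s.io/api types; the class name, handler, and quantities here are illustrative, not the values the test registers:

```go
// Sketch: a RuntimeClass with a fixed per-pod overhead. The scheduler adds
// Overhead.PodFixed to the sum of the pod's container requests before
// fitting the pod; in the spec above the overhead is charged against the
// fake example.com/beardsecond resource.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func overheadRuntimeClass() *nodev1.RuntimeClass {
	return &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "example-overhead"}, // illustrative name
		Handler:    "runc",                                      // illustrative handler
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("100m"),
				corev1.ResourceMemory: resource.MustParse("64Mi"),
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", overheadRuntimeClass().Overhead.PodFixed)
}
```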
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:51:00.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 23:51:00.918: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 23:51:00.925: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 23:51:00.928: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 29 23:51:00.940: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:51:00.940: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container discover ready: false, restart count 0
Apr 29 23:51:00.940: INFO: Container init ready: false, restart count 0
Apr 29 23:51:00.940: INFO: Container install ready: false, restart count 0
Apr 29 23:51:00.940: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 23:51:00.940: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:51:00.940: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:51:00.940: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 29 23:51:00.940: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 23:51:00.940: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:51:00.940: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:51:00.940: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:51:00.940: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container collectd ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:51:00.940: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container node-exporter ready: true, restart count 0
Apr 29 23:51:00.940: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container config-reloader ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container grafana ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Container prometheus ready: true, restart count 1
Apr 29 23:51:00.940: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.940: INFO: Container tas-extender ready: true, restart count 0
Apr 29 23:51:00.940: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 29 23:51:00.960: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:51:00.960: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:51:00.960: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container discover ready: false, restart count 0
Apr 29 23:51:00.960: INFO: Container init ready: false, restart count 0
Apr 29 23:51:00.960: INFO: Container install ready: false, restart count 0
Apr 29 23:51:00.960: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container cmk-webhook ready: true, restart count 0
Apr 29 23:51:00.960: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container kube-flannel ready: true, restart count 3
Apr 29 23:51:00.960: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:51:00.960: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:51:00.960: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:51:00.960: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:51:00.960: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:51:00.960: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container collectd ready: true, restart count 0
Apr 29 23:51:00.960: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:51:00.960: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:51:00.960: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:51:00.960: INFO: Container node-exporter ready: true, restart count 0
Apr 29 23:51:00.960: INFO: filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090 from sched-pred-28 started at 2022-04-29 23:50:55 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:00.960: INFO: Container filler-pod-bfc9021f-d506-46ab-8f83-4cd625065090 ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16ea83041bc23e76], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:51:08.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8303" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:7.176 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":2,"skipped":592,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
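The restricted pod above stays Pending because its node selector names a label that no node carries. A minimal sketch of such a pod, assuming stock k8s.io/api types; the selector key/value are illustrative (the test generates its own):

```go
// Sketch: a pod whose nodeSelector can never be satisfied, producing the
// FailedScheduling event with "didn't match Pod's node affinity/selector".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func restrictedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node has this label, so the pod must stay Pending.
			NodeSelector: map[string]string{"nonexistent-label": "nonempty"}, // illustrative
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
}

func main() {
	fmt.Println(restrictedPod().Spec.NodeSelector)
}
```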
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:51:08.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 23:51:08.097: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 23:51:08.105: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 23:51:08.106: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 29 23:51:08.117: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:51:08.117: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container discover ready: false, restart count 0
Apr 29 23:51:08.117: INFO: Container init ready: false, restart count 0
Apr 29 23:51:08.117: INFO: Container install ready: false, restart count 0
Apr 29 23:51:08.117: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 23:51:08.117: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:51:08.117: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:51:08.117: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 29 23:51:08.117: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 23:51:08.117: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:51:08.117: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:51:08.117: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:51:08.117: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container collectd ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:51:08.117: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container node-exporter ready: true, restart count 0
Apr 29 23:51:08.117: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container config-reloader ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container grafana ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Container prometheus ready: true, restart count 1
Apr 29 23:51:08.117: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.117: INFO: Container tas-extender ready: true, restart count 0
Apr 29 23:51:08.117: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 29 23:51:08.126: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:51:08.126: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:51:08.126: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container discover ready: false, restart count 0
Apr 29 23:51:08.126: INFO: Container init ready: false, restart count 0
Apr 29 23:51:08.126: INFO: Container install ready: false, restart count 0
Apr 29 23:51:08.126: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container cmk-webhook ready: true, restart count 0
Apr 29 23:51:08.126: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container kube-flannel ready: true, restart count 3
Apr 29 23:51:08.126: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:51:08.126: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:51:08.126: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:51:08.126: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:51:08.126: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:51:08.126: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container collectd ready: true, restart count 0
Apr 29 23:51:08.126: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:51:08.126: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:51:08.126: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:08.126: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:51:08.126: INFO: Container node-exporter ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-cc0cbfc0-e476-44bb-97ea-19d6b3e433d0=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-26b64cac-0ae7-42c1-9825-b7286b827055 testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-26b64cac-0ae7-42c1-9825-b7286b827055 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-26b64cac-0ae7-42c1-9825-b7286b827055
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-cc0cbfc0-e476-44bb-97ea-19d6b3e433d0=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:51:16.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9118" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:8.170 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":3,"skipped":834,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
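The matching case above hinges on an exact taint/toleration pair: the node gets a random NoSchedule taint, and the relaunched pod tolerates precisely that key, value, and effect. A sketch of the pairing with illustrative key/value, using stock k8s.io/api types:

```go
// Sketch: a NoSchedule taint and the exactly matching toleration that lets
// the relaunched pod back onto the tainted node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

var taint = corev1.Taint{
	Key:    "kubernetes.io/e2e-taint-key-example", // illustrative; the test randomizes this
	Value:  "testing-taint-value",
	Effect: corev1.TaintEffectNoSchedule,
}

// A toleration with Operator=Equal matches when key, value, and effect all line up.
var toleration = corev1.Toleration{
	Key:      taint.Key,
	Operator: corev1.TolerationOpEqual,
	Value:    taint.Value,
	Effect:   taint.Effect,
}

func main() {
	// Prints true: the toleration tolerates the taint.
	fmt.Println(toleration.ToleratesTaint(&taint))
}
```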
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:51:16.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 23:51:16.265: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 23:51:16.273: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 23:51:16.276: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 29 23:51:16.283: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:51:16.283: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container discover ready: false, restart count 0
Apr 29 23:51:16.283: INFO: Container init ready: false, restart count 0
Apr 29 23:51:16.283: INFO: Container install ready: false, restart count 0
Apr 29 23:51:16.283: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kube-flannel ready: true, restart count 2
Apr 29 23:51:16.283: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:51:16.283: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:51:16.283: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 29 23:51:16.283: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 29 23:51:16.283: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:51:16.283: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:51:16.283: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:51:16.283: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container collectd ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:51:16.283: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container node-exporter ready: true, restart count 0
Apr 29 23:51:16.283: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container config-reloader ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container grafana ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Container prometheus ready: true, restart count 1
Apr 29 23:51:16.283: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.283: INFO: Container tas-extender ready: true, restart count 0
Apr 29 23:51:16.283: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 29 23:51:16.292: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container nodereport ready: true, restart count 0
Apr 29 23:51:16.292: INFO: Container reconcile ready: true, restart count 0
Apr 29 23:51:16.292: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container discover ready: false, restart count 0
Apr 29 23:51:16.292: INFO: Container init ready: false, restart count 0
Apr 29 23:51:16.292: INFO: Container install ready: false, restart count 0
Apr 29 23:51:16.292: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container cmk-webhook ready: true, restart count 0
Apr 29 23:51:16.292: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container kube-flannel ready: true, restart count 3
Apr 29 23:51:16.292: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container kube-multus ready: true, restart count 1
Apr 29 23:51:16.292: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container kube-proxy ready: true, restart count 2
Apr 29 23:51:16.292: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container nginx-proxy ready: true, restart count 2
Apr 29 23:51:16.292: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container nfd-worker ready: true, restart count 0
Apr 29 23:51:16.292: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 29 23:51:16.292: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container collectd ready: true, restart count 0
Apr 29 23:51:16.292: INFO: Container collectd-exporter ready: true, restart count 0
Apr 29 23:51:16.292: INFO: Container rbac-proxy ready: true, restart count 0
Apr 29 23:51:16.292: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded)
Apr 29 23:51:16.292: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 29 23:51:16.292: INFO: Container node-exporter ready: true, restart count 0
Apr 29 23:51:16.292: INFO: with-tolerations from sched-pred-9118 started at 2022-04-29 23:51:12 +0000 UTC (1 container statuses recorded)
Apr 29 23:51:16.293: INFO: Container with-tolerations ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f39df492-c64d-44f5-aaca-4fbe69095c65=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-49a5ba02-daf3-4ffe-a229-bc3a5ad5687e testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306437ef55a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1363/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea83069baf9d13], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306aebc8287], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 319.606367ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306b556cb73], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306bb8087d1], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea830732ec2b99], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16ea830734d0b121], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-f39df492-c64d-44f5-aaca-4fbe69095c65: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16ea830734d0b121], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-f39df492-c64d-44f5-aaca-4fbe69095c65: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306437ef55a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1363/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea83069baf9d13], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306aebc8287], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 319.606367ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306b556cb73], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea8306bb8087d1], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16ea830732ec2b99], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f39df492-c64d-44f5-aaca-4fbe69095c65=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16ea8307a0ab69fb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1363/still-no-tolerations to node2]
STEP: removing the label kubernetes.io/e2e-label-key-49a5ba02-daf3-4ffe-a229-bc3a5ad5687e off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-49a5ba02-daf3-4ffe-a229-bc3a5ad5687e
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f39df492-c64d-44f5-aaca-4fbe69095c65=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:51:22.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1363" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:6.167 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":4,"skipped":900,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
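The non-matching case is the same scenario minus the toleration: "still-no-tolerations" stays Pending until the taint is removed. The core check the scheduler performs reduces to the sketch below, which uses the ToleratesTaint helper from k8s.io/api/core/v1:

```go
// Sketch: a pod tolerates a taint only if at least one of its tolerations
// matches it. An empty toleration list can never match, which is why the
// pod above produced FailedScheduling until the taint was removed.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func tolerates(taint corev1.Taint, tolerations []corev1.Toleration) bool {
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(&taint) {
			return true
		}
	}
	return false // no toleration matches: NoSchedule keeps the pod off the node
}

func main() {
	t := corev1.Taint{Key: "example-key", Value: "example-value", Effect: corev1.TaintEffectNoSchedule} // illustrative
	fmt.Println(tolerates(t, nil))                                                                      // false: no tolerations at all
}
```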
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:51:22.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Apr 29 23:51:22.439: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 29 23:52:22.491: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 23:52:22.493: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 23:52:22.511: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting
Apr 29 23:52:22.511: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting
Apr 29 23:52:22.511: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 23:52:22.511: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 29 23:52:22.528: INFO: ComputeCPUMemFraction for node: node1
Apr 29 23:52:22.528: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400
Apr 29 23:52:22.528: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.528: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000
Apr 29 23:52:22.528: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840
Apr 29 23:52:22.528: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000
Apr 29 23:52:22.528: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 29 23:52:22.528: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.528: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 29 23:52:22.528: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
Apr 29 23:52:22.528: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
Apr 29 23:52:22.528: INFO: ComputeCPUMemFraction for node: node2
Apr 29 23:52:22.528: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400
Apr 29 23:52:22.528: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.528: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000
Apr 29 23:52:22.528: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840
Apr 29 23:52:22.528: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 29 23:52:22.528: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.528: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.528: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200
Apr 29 23:52:22.528: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
Apr 29 23:52:22.529: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
[It] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
Apr 29 23:52:22.545: INFO: ComputeCPUMemFraction for node: node1
Apr 29 23:52:22.545: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400
Apr 29 23:52:22.545: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.545: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000
Apr 29 23:52:22.545: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840
Apr 29 23:52:22.545: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.545: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000
Apr 29 23:52:22.545: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.545: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 29 23:52:22.545: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.545: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.545: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.545: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200
Apr 29 23:52:22.546: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 29 23:52:22.546: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.546: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
Apr 29 23:52:22.546: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
Apr 29 23:52:22.546: INFO: ComputeCPUMemFraction for node: node2
Apr 29 23:52:22.546: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400
Apr 29 23:52:22.546: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.546: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.546: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000
Apr 29 23:52:22.546: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840
Apr 29 23:52:22.546: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.546: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 29 23:52:22.546: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.546: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200
Apr 29 23:52:22.546: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600
Apr 29 23:52:22.546: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200
Apr 29 23:52:22.546: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
Apr 29 23:52:22.546: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
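The fractions above are plain ratios of requested to allocatable resources; a worked check of the logged values, as a standalone Go snippet with no Kubernetes dependencies:

```go
// Worked check of the ComputeCPUMemFraction output above, using the exact
// numbers from the log lines.
package main

import "fmt"

func main() {
	// node1: 937 requested milli-CPU of 77000 allocatable
	fmt.Println(937.0 / 77000.0) // 0.01216883116883117, as logged
	// node1 memory: 1774807040 of 178884608000 bytes
	fmt.Println(1774807040.0 / 178884608000.0) // ≈0.009921519016325877
	// node2: 487 requested milli-CPU of 77000 allocatable
	fmt.Println(487.0 / 77000.0) // ≈0.006324675324675325
}
```

The balanced-pods helper then sizes one filler pod per node so that every node lands at cpuFraction 0.5: for node1 that is 38500 - 937 = 37563 milli-CPU, matching the filler pod request logged below.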
Apr 29 23:52:27.631: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:52:27.631: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:52:27.631: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:52:27.631: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:52:27.631: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:52:27.631: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:52:27.631: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:52:27.631: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:52:27.631: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:52:27.631: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: cac4cafc-494e-4c6f-88da-bca90c58b989-0, Cpu: 37563, Mem: 87680079872 Apr 29 23:52:27.631: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 29 23:52:27.631: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. Apr 29 23:52:27.631: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:52:27.631: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:52:27.631: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:52:27.631: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:52:27.631: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:52:27.631: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:52:27.631: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:52:27.631: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:52:27.631: INFO: Pod for on the node: 552810a7-eb12-442a-a181-cc95bb5dec0c-0, Cpu: 38013, Mem: 88949942272 Apr 29 23:52:27.631: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 29 23:52:27.631: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. 
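The annotation applied in that STEP is the alpha preferAvoidPods node annotation: a JSON-encoded v1.AvoidPods value naming the controller whose pods the node should repel. The scheduler treats it as a soft scoring signal, which is why the test expects the scaled-up RC pod to land on node2. A sketch of constructing such a value (types and the annotation-key constant are from k8s.io/api/core/v1 as of v1.21; the Reason/Message strings are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pods owned by the named ReplicationController should be avoided on the
	// annotated node; this only influences scoring, it does not hard-filter.
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod", // RC name from the log
				},
			},
			Reason:  "some reason",  // illustrative
			Message: "some message", // illustrative
		}},
	}
	val, _ := json.Marshal(avoid)
	// The marshaled value goes under this node annotation:
	// scheduler.alpha.kubernetes.io/preferAvoidPods
	fmt.Printf("%s=%s\n", v1.PreferAvoidPodsAnnotationKey, val)
}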
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3712 to 1
STEP: Verify the pods should not scheduled to the node: node1
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3712, will wait for the garbage collector to delete the pods
Apr 29 23:52:33.812: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.710812ms
Apr 29 23:52:33.913: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 101.194969ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:52:41.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-3712" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:79.318 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":5,"skipped":1348,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering
  validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:52:41.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 23:52:41.763: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 23:52:41.771: INFO: Waiting for terminating namespaces to be deleted...
Apr 29 23:52:41.774: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 23:52:41.784: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:52:41.784: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:52:41.784: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 23:52:41.784: INFO: Container discover ready: false, restart count 0 Apr 29 23:52:41.784: INFO: Container init ready: false, restart count 0 Apr 29 23:52:41.784: INFO: Container install ready: false, restart count 0 Apr 29 23:52:41.784: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 23:52:41.784: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:52:41.784: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:52:41.784: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 23:52:41.784: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 23:52:41.784: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:52:41.784: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:52:41.784: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:52:41.784: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:52:41.784: INFO: Container collectd ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:52:41.784: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:52:41.784: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Container node-exporter ready: true, restart count 0 Apr 29 23:52:41.784: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 23:52:41.784: INFO: Container config-reloader ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 
23:52:41.784: INFO: Container grafana ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Container prometheus ready: true, restart count 1 Apr 29 23:52:41.784: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.784: INFO: Container tas-extender ready: true, restart count 0 Apr 29 23:52:41.784: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 23:52:41.793: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:52:41.793: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:52:41.793: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:52:41.793: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 23:52:41.793: INFO: Container discover ready: false, restart count 0 Apr 29 23:52:41.793: INFO: Container init ready: false, restart count 0 Apr 29 23:52:41.793: INFO: Container install ready: false, restart count 0 Apr 29 23:52:41.793: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 23:52:41.793: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 23:52:41.793: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:52:41.793: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:52:41.793: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:52:41.793: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:52:41.793: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:52:41.793: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:52:41.793: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:52:41.793: INFO: Container collectd ready: true, restart count 0 Apr 29 23:52:41.793: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:52:41.793: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:52:41.793: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:52:41.793: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:52:41.793: INFO: Container node-exporter ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which 
can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:52:59.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9379" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:18.171 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":6,"skipped":1603,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:52:59.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 29 23:52:59.936: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 29 23:52:59.943: INFO: Waiting for terminating namespaces to be deleted...
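The MaxSkew=1 case that just passed reduces to one spread constraint on the 4 test pods: with the two chosen nodes carrying the dedicated kubernetes.io/e2e-pts-filter label key and whenUnsatisfiable set to DoNotSchedule, the pod-count difference between the two topology domains can never exceed 1, which forces a 2/2 split. A sketch of that constraint (the label selector values are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// With maxSkew=1 and DoNotSchedule, the scheduler hard-filters any node
	// whose domain would end up 2 pods ahead of the other domain.
	c := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter", // key the test applies to both nodes
		WhenUnsatisfiable: v1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"name": "e2e-pts-filter"}, // illustrative selector
		},
	}
	fmt.Printf("%+v\n", c)
}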
Apr 29 23:52:59.945: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 23:52:59.956: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:52:59.957: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:52:59.957: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 23:52:59.957: INFO: Container discover ready: false, restart count 0 Apr 29 23:52:59.957: INFO: Container init ready: false, restart count 0 Apr 29 23:52:59.957: INFO: Container install ready: false, restart count 0 Apr 29 23:52:59.957: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 23:52:59.957: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:52:59.957: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:52:59.957: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 23:52:59.957: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 23:52:59.957: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:52:59.957: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:52:59.957: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:52:59.957: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:52:59.957: INFO: Container collectd ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:52:59.957: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:52:59.957: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Container node-exporter ready: true, restart count 0 Apr 29 23:52:59.957: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 23:52:59.957: INFO: Container config-reloader ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 
23:52:59.957: INFO: Container grafana ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Container prometheus ready: true, restart count 1 Apr 29 23:52:59.957: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container tas-extender ready: true, restart count 0 Apr 29 23:52:59.957: INFO: rs-e2e-pts-filter-7w2w2 from sched-pred-9379 started at 2022-04-29 23:52:53 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container e2e-pts-filter ready: true, restart count 0 Apr 29 23:52:59.957: INFO: rs-e2e-pts-filter-d8v9k from sched-pred-9379 started at 2022-04-29 23:52:53 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.957: INFO: Container e2e-pts-filter ready: true, restart count 0 Apr 29 23:52:59.957: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 23:52:59.982: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:52:59.982: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:52:59.982: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:52:59.982: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 23:52:59.982: INFO: Container discover ready: false, restart count 0 Apr 29 23:52:59.982: INFO: Container init ready: false, restart count 0 Apr 29 23:52:59.982: INFO: Container install ready: false, restart count 0 Apr 29 23:52:59.982: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 23:52:59.982: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 23:52:59.982: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:52:59.982: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:52:59.982: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:52:59.982: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:52:59.982: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:52:59.982: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:52:59.982: INFO: Container collectd ready: true, restart count 0 Apr 29 23:52:59.982: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:52:59.982: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:52:59.982: INFO: node-exporter-tlpmt 
from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:52:59.982: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:52:59.982: INFO: Container node-exporter ready: true, restart count 0 Apr 29 23:52:59.982: INFO: rs-e2e-pts-filter-mll94 from sched-pred-9379 started at 2022-04-29 23:52:53 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container e2e-pts-filter ready: true, restart count 0 Apr 29 23:52:59.982: INFO: rs-e2e-pts-filter-n8whp from sched-pred-9379 started at 2022-04-29 23:52:53 +0000 UTC (1 container statuses recorded) Apr 29 23:52:59.982: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Apr 29 23:53:00.016: INFO: Pod cmk-74bh9 requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod cmk-f5znp requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod cmk-webhook-6c9d5f8578-b9mdv requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod kube-flannel-47phs requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod kube-flannel-dbcj8 requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod kube-multus-ds-amd64-7slcd requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod kube-multus-ds-amd64-kkz4q requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod kube-proxy-k6tv2 requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod kube-proxy-v9tgj requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod kubernetes-dashboard-785dcbb76d-d2k5n requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod kubernetes-metrics-scraper-5558854cb-g47c2 requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod node-feature-discovery-worker-jtjjb requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod node-feature-discovery-worker-kbl9s requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod collectd-ccgw2 requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod collectd-zxs8j requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod node-exporter-c8777 requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod node-exporter-tlpmt requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-khdw5 requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod rs-e2e-pts-filter-7w2w2 requesting local 
ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod rs-e2e-pts-filter-d8v9k requesting local ephemeral resource =0 on Node node1 Apr 29 23:53:00.016: INFO: Pod rs-e2e-pts-filter-mll94 requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Pod rs-e2e-pts-filter-n8whp requesting local ephemeral resource =0 on Node node2 Apr 29 23:53:00.016: INFO: Using pod capacity: 40608090249 Apr 29 23:53:00.016: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 Apr 29 23:53:00.016: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Apr 29 23:53:00.209: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ea831e69a7822a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-0 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ea831f62b576e6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ea831f89e32160], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 657.292558ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ea831f961d3622], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ea831fd7f4985f], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ea831e6a283ca2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ea831fd8a6562b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ea831fee035d9d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 358.409337ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ea831ffa9882a2], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ea83205715c12a], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ea831e6f31843a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ea83208adf3989], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ea8320d84a23af], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.298847276s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ea8320def20af0], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ea8320e57902bb], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ea831e6fc0e035], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ea832088ebbbcc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = 
[Normal], Name = [overcommit-11.16ea8320af14b746], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 640.213194ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ea8320b5fa3bda], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ea8320bc614a78], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ea831e704be4f5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ea8320483fef96], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ea8320a3839804], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.531149173s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ea8320b46fa4f4], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ea8320c4506420], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ea831e70e444c0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ea83204688a6af], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ea83205fc57a92], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 423.402021ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ea8320705c773c], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ea8320952ea750], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ea831e71769892], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ea831f5c57ec0b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ea831f70a9bb89], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 340.8964ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ea831f89d847ee], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ea831fbe332163], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ea831e720b0026], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ea83204692218c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ea832080d46fff], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 977.392867ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ea832098075b63], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = 
[Normal], Name = [overcommit-15.16ea8320aaacb4cc], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ea831e7298d239], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ea83208957df9e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ea8320cc59d498], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.124186335s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ea8320d87ea1c0], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ea8320dfcccd66], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ea831e731e920c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ea832046fad8b7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ea832091352955], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.245327512s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ea83209b855d36], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ea8320b58818c2], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ea831e73ada008], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ea8320518e1265], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ea8320b7af502c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.71344417s] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ea8320bf1bfa9d], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ea8320d34a58ba], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ea831e744aa436], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ea832093f5ec1b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ea8320e211de17], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.31044234s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ea8320e949809c], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ea8320f1b498e9], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ea831e6aadb9cc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-2 to node1] STEP: Considering event: Type = 
[Normal], Name = [overcommit-2.16ea831ef212ed43], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ea831f0551408f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 322.843354ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ea831f3036275e], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ea831f96c269c9], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ea831e6b3aa975], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ea831f1fe1e959], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ea831f375500c1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 393.410983ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ea831f6351df75], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ea831fb1d2dfd5], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ea831e6bd93ee3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ea831f2bb12003], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ea831f521eab69], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 644.706143ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ea831f7de8b7ae], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ea831fdf8495dd], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ea831e6c6bce4b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ea832088d29316], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ea83209b49b6da], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 309.791618ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ea8320a4355a11], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ea8320b5ce71d6], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ea831e6cf644b2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ea831fdfdd0baa], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ea83201e921935], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.05204561s] STEP: Considering event: Type = [Normal], Name = 
[overcommit-6.16ea83204f66cbc9], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ea83209f03c324], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ea831e6d8f1f54], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ea8320010b1145], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ea832031eeb529], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 820.216841ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ea8320464c2961], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ea8320941f9f72], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ea831e6e1857c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ea83208aadf753], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ea8320c4df9c49], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 976.325985ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ea8320cb8f9aa3], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ea8320d2232421], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ea831e6ea68d04], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2273/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ea831fdfa72671], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ea832000f29d3d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 558.585355ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ea832016affcc3], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ea83208f3bf450], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16ea8321f6d3b1e2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:53:16.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2273" for this suite. 
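The saturation arithmetic is all in the log: each node reports 406080902496 bytes of allocatable local ephemeral storage, the test slices that into 10 pod-sized requests per node ("Using pod capacity: 40608090249"), starts 20 overcommit pods across the 2 workers, and the additional 21st pod then fails scheduling with "2 Insufficient ephemeral-storage" (the 3 master nodes are excluded by taint). Re-deriving the numbers, assuming only the logged values:

package main

import "fmt"

func main() {
	const allocatablePerNode int64 = 406080902496 // bytes, from the log
	const podsPerNode = 10                        // consistent with the logged pod capacity

	podCapacity := allocatablePerNode / podsPerNode
	fmt.Println(podCapacity)                                  // 40608090249, as logged
	fmt.Println(2 * podsPerNode)                              // 20 pods saturate both nodes
	fmt.Println(allocatablePerNode - podCapacity*podsPerNode) // 6 bytes of slack per node
}

Only 6 bytes of allocatable remain per node once the 10 pods are placed, so any further pod requesting the same capacity cannot fit anywhere.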
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.394 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":7,"skipped":1660,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:53:16.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Apr 29 23:53:16.345: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 23:54:16.399: INFO: Waiting for terminating namespaces to be deleted... 
Apr 29 23:54:16.401: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 29 23:54:16.420: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting Apr 29 23:54:16.420: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting Apr 29 23:54:16.420: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 29 23:54:16.420: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Apr 29 23:54:16.441: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:54:16.441: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:54:16.441: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.441: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:54:16.441: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:54:16.441: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.441: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:54:16.441: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.441: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:54:16.441: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.441: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.441: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.441: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:54:16.441: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:54:16.441: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.441: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Apr 29 23:54:16.441: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Apr 29 23:54:16.441: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:54:16.441: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:54:16.441: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.442: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.442: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:54:16.442: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:54:16.442: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.442: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:54:16.442: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.442: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.442: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.442: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:54:16.442: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 
77000, cpuFraction: 0.006324675324675325 Apr 29 23:54:16.442: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Apr 29 23:54:16.458: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:54:16.458: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:54:16.458: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.458: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:54:16.458: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:54:16.458: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.458: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:54:16.458: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.458: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:54:16.458: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.458: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.458: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.459: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:54:16.459: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:54:16.459: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.459: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Apr 29 23:54:16.459: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Apr 29 23:54:16.459: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:54:16.459: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:54:16.459: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.459: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.459: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:54:16.459: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:54:16.459: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.459: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:54:16.459: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.459: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:54:16.459: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:54:16.459: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:54:16.459: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Apr 29 23:54:16.459: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 
Apr 29 23:54:16.473: INFO: Waiting for running... Apr 29 23:54:16.476: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Apr 29 23:54:21.545: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:54:21.545: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:54:21.545: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:54:21.545: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:54:21.545: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:54:21.545: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:54:21.545: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:54:21.545: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:54:21.545: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:54:21.545: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: bb783646-1b33-454b-a8b6-2955596df7bd-0, Cpu: 37563, Mem: 87680079872 Apr 29 23:54:21.545: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 29 23:54:21.545: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Apr 29 23:54:21.545: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:54:21.545: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:54:21.545: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:54:21.545: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:54:21.545: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:54:21.545: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:54:21.545: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:54:21.545: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:54:21.545: INFO: Pod for on the node: 17d194a3-8ef5-4fc8-91ee-ab3db61c6a1a-0, Cpu: 38013, Mem: 88949942272 Apr 29 23:54:21.545: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 29 23:54:21.545: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Trying to apply 10 (tolerable) taints on the first node. STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-279bd938-8dae-4fc9-b9cb=testing-taint-value-347f59fb-5a13-4633-9e7a-4fc8213466a5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3d693d12-34c9-47f6-a7c0=testing-taint-value-4a18ff32-7084-4ba0-b761-9ba327c97ed3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-48f15f33-1768-4fdb-af67=testing-taint-value-e0717c16-7dac-40fb-b9e8-680609e34a65:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-41716e7c-33a2-48db-9471=testing-taint-value-d4625be1-2a48-4ca8-a94f-bd692ffe445e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2e3cd2d3-d41b-490e-b0e1=testing-taint-value-3e6e576d-a055-4ebf-833d-4994f20daaf7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-383262cb-7037-4f83-8f0b=testing-taint-value-cc45aca2-ca14-4895-9199-9c3d9ea3cee5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e50890ae-ff04-4b40-a55c=testing-taint-value-08cff78b-7b67-4c5a-8135-6e4bf19d9d3e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9d87c81e-1cc9-435e-b23e=testing-taint-value-56fc4853-7c9d-4c98-8f0f-3c48f4501b1e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7608dfc0-d4fe-468d-a6d1=testing-taint-value-4b71e5fc-94e1-4ca1-81e7-1929476226da:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b5b6f710-bd24-476d-838d=testing-taint-value-18cda978-487d-429d-9764-cf1aa1a049fc:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-fb2d3bc3-db33-4357-98cc=testing-taint-value-e1d41312-6977-4b82-a7fb-176b367b7a40:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f166d0f6-f26c-44a2-8503=testing-taint-value-b29cff90-a7fe-4639-b134-5a6810b7de55:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2302d917-8b22-4a93-8898=testing-taint-value-19ac6c2f-6f39-4a2b-bf23-740577b25f97:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-62e82b2c-fb5b-41dd-bb08=testing-taint-value-8dac4bb3-1f13-4291-a938-9c6eb84d40e5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4c1a5d4e-a05c-4cee-b291=testing-taint-value-a13754b8-e6c6-4cc1-b7de-280e95ceaf77:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2c7a1063-8a39-44ab-b72a=testing-taint-value-b2e31340-1a55-4158-a59a-bcb8fbe2cfe6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e8a11c4f-59c7-4007-8856=testing-taint-value-fcc65db1-82bf-4522-9178-6458533628aa:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-114f5166-998f-4d06-9533=testing-taint-value-850ac506-089d-4323-a4ea-ab298601acff:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8c0d6b1c-bb6b-4bcb-ab2a=testing-taint-value-ba9eed01-f66b-49b8-9637-015fbed10c6e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3a009083-c19e-4448-9166=testing-taint-value-7de84697-198a-4bb1-ad7b-a59f19e52bbb:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. 
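------------------------------
All of the taints above are soft (PreferNoSchedule), so a mismatch only lowers a node's score instead of filtering the node out; the test then gives the pod a matching toleration for each taint on the first node and none for the other nodes' taints. A sketch of their shape using the k8s.io/api types (assumes k8s.io/api is on the module path; the <uuid> placeholders stand in for the random suffixes logged above):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One of the ten soft taints placed on the first node.
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-<uuid>",
		Value:  "testing-taint-value-<uuid>",
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// A matching toleration, as carried by the test pod for each
	// taint on the first node.
	tol := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectPreferNoSchedule,
	}
	fmt.Println(tol.ToleratesTaint(&taint)) // true
}
------------------------------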
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fb2d3bc3-db33-4357-98cc=testing-taint-value-e1d41312-6977-4b82-a7fb-176b367b7a40:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f166d0f6-f26c-44a2-8503=testing-taint-value-b29cff90-a7fe-4639-b134-5a6810b7de55:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2302d917-8b22-4a93-8898=testing-taint-value-19ac6c2f-6f39-4a2b-bf23-740577b25f97:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-62e82b2c-fb5b-41dd-bb08=testing-taint-value-8dac4bb3-1f13-4291-a938-9c6eb84d40e5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4c1a5d4e-a05c-4cee-b291=testing-taint-value-a13754b8-e6c6-4cc1-b7de-280e95ceaf77:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2c7a1063-8a39-44ab-b72a=testing-taint-value-b2e31340-1a55-4158-a59a-bcb8fbe2cfe6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e8a11c4f-59c7-4007-8856=testing-taint-value-fcc65db1-82bf-4522-9178-6458533628aa:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-114f5166-998f-4d06-9533=testing-taint-value-850ac506-089d-4323-a4ea-ab298601acff:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8c0d6b1c-bb6b-4bcb-ab2a=testing-taint-value-ba9eed01-f66b-49b8-9637-015fbed10c6e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3a009083-c19e-4448-9166=testing-taint-value-7de84697-198a-4bb1-ad7b-a59f19e52bbb:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-279bd938-8dae-4fc9-b9cb=testing-taint-value-347f59fb-5a13-4633-9e7a-4fc8213466a5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3d693d12-34c9-47f6-a7c0=testing-taint-value-4a18ff32-7084-4ba0-b761-9ba327c97ed3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-48f15f33-1768-4fdb-af67=testing-taint-value-e0717c16-7dac-40fb-b9e8-680609e34a65:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-41716e7c-33a2-48db-9471=testing-taint-value-d4625be1-2a48-4ca8-a94f-bd692ffe445e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2e3cd2d3-d41b-490e-b0e1=testing-taint-value-3e6e576d-a055-4ebf-833d-4994f20daaf7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-383262cb-7037-4f83-8f0b=testing-taint-value-cc45aca2-ca14-4895-9199-9c3d9ea3cee5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e50890ae-ff04-4b40-a55c=testing-taint-value-08cff78b-7b67-4c5a-8135-6e4bf19d9d3e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9d87c81e-1cc9-435e-b23e=testing-taint-value-56fc4853-7c9d-4c98-8f0f-3c48f4501b1e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7608dfc0-d4fe-468d-a6d1=testing-taint-value-4b71e5fc-94e1-4ca1-81e7-1929476226da:PreferNoSchedule STEP: verifying the 
node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b5b6f710-bd24-476d-838d=testing-taint-value-18cda978-487d-429d-9764-cf1aa1a049fc:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:54:30.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2143" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:74.583 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":8,"skipped":2958,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:54:30.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Apr 29 23:54:30.942: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 23:55:30.996: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. 
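------------------------------
The setup just applied (a dedicated topology key on both nodes plus 10 units of a fake extended resource on each) stages the preemption check that follows: high- and low-priority pods fill 9/10 of the fake resource on both nodes, then a medium-priority pod with a hard spread constraint must preempt something to fit. A sketch of the medium pod's relevant spec fields; the resource name, priority-class name, image, and selector are illustrative stand-ins, not the test's exact values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fakeRes := v1.ResourceName("example.com/fakecpu") // stand-in for the test's fake resource

	medium := v1.PodSpec{
		PriorityClassName: "medium-priority", // assumed PriorityClass name
		Containers: []v1.Container{{
			Name:  "pause",
			Image: "k8s.gcr.io/pause:3.4.1",
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{fakeRes: resource.MustParse("4")},
				Limits:   v1.ResourceList{fakeRes: resource.MustParse("4")},
			},
		}},
		TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-preemption",
			WhenUnsatisfiable: v1.DoNotSchedule, // hard constraint: may trigger preemption
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
			},
		}},
	}
	fmt.Println(medium.TopologySpreadConstraints[0].TopologyKey)
}
------------------------------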
[It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:56:07.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7225" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:96.375 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":9,"skipped":3455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:56:07.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Apr 29 23:56:07.319: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 23:57:07.370: INFO: Waiting for terminating namespaces to be 
deleted... Apr 29 23:57:07.372: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 29 23:57:07.391: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting Apr 29 23:57:07.391: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting Apr 29 23:57:07.391: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 29 23:57:07.391: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Apr 29 23:57:07.405: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:57:07.405: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:57:07.405: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:57:07.405: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:57:07.405: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:57:07.405: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:57:07.406: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:57:07.406: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:57:07.406: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:57:07.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Apr 29 23:57:07.406: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Apr 29 23:57:07.406: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:57:07.406: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:57:07.406: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:57:07.406: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:57:07.406: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:57:07.406: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:57:07.406: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:57:07.406: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:57:07.406: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:57:07.406: INFO: Node: node2, totalRequestedCPUResource: 487, 
cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Apr 29 23:57:07.406: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Apr 29 23:57:15.507: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:57:15.507: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:57:15.507: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:57:15.507: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:57:15.507: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:57:15.507: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:57:15.507: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:57:15.507: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:57:15.507: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Apr 29 23:57:15.507: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 Apr 29 23:57:15.507: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:57:15.507: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:57:15.507: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:57:15.507: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:57:15.507: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:57:15.507: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:57:15.507: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:57:15.507: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: 
INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:57:15.507: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:57:15.507: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:57:15.507: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:57:15.507: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Apr 29 23:57:15.507: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Apr 29 23:57:15.519: INFO: Waiting for running... Apr 29 23:57:15.523: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Apr 29 23:57:20.615: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:57:20.615: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:57:20.615: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:57:20.615: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:57:20.615: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:57:20.615: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:57:20.615: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:57:20.615: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: 1b5215ae-c589-4f04-80b0-ab35b993e328-0, Cpu: 38013, Mem: 88949942272 Apr 29 23:57:20.615: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 29 23:57:20.615: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Apr 29 23:57:20.615: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:57:20.615: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:57:20.615: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:57:20.615: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:57:20.615: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:57:20.615: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:57:20.615: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:57:20.615: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:57:20.615: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:57:20.615: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:57:20.615: INFO: Pod for on the node: 02fbac8f-7223-4877-b045-efcde5076e80-0, Cpu: 37563, Mem: 87680079872 Apr 29 23:57:20.615: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 29 23:57:20.615: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:57:46.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-628" for this suite. 
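------------------------------
Net effect of the scoring run above: with four matching replicas already on node2 and none on node1, a soft topology-spread constraint scores node1 higher, which is why test-pod is expected to land there. A sketch of such a soft constraint (the selector is illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	c := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score",
		WhenUnsatisfiable: v1.ScheduleAnyway, // soft: affects scoring only, never filters
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "rs-e2e-pts-score"}, // illustrative
		},
	}
	fmt.Printf("%+v\n", c)
}
------------------------------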
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:99.415 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":10,"skipped":3804,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:57:46.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 29 23:57:46.749: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 23:57:46.757: INFO: Waiting for terminating namespaces to be deleted... 
Apr 29 23:57:46.759: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 23:57:46.766: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:57:46.766: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:57:46.766: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:57:46.766: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 23:57:46.766: INFO: Container discover ready: false, restart count 0 Apr 29 23:57:46.766: INFO: Container init ready: false, restart count 0 Apr 29 23:57:46.766: INFO: Container install ready: false, restart count 0 Apr 29 23:57:46.766: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.766: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 23:57:46.766: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.766: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:57:46.766: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.766: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:57:46.766: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.766: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 23:57:46.766: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.766: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 23:57:46.766: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.766: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:57:46.767: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.767: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:57:46.767: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.767: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:57:46.767: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:57:46.767: INFO: Container collectd ready: true, restart count 0 Apr 29 23:57:46.767: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:57:46.767: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:57:46.767: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:57:46.767: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:57:46.767: INFO: Container node-exporter ready: true, restart count 0 Apr 29 23:57:46.767: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 23:57:46.767: INFO: Container config-reloader ready: true, restart count 0 Apr 29 23:57:46.767: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 
23:57:46.767: INFO: Container grafana ready: true, restart count 0 Apr 29 23:57:46.767: INFO: Container prometheus ready: true, restart count 1 Apr 29 23:57:46.767: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.767: INFO: Container tas-extender ready: true, restart count 0 Apr 29 23:57:46.767: INFO: test-pod from sched-priority-628 started at 2022-04-29 23:57:28 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.767: INFO: Container test-pod ready: true, restart count 0 Apr 29 23:57:46.767: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 23:57:46.776: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:57:46.776: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:57:46.776: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:57:46.776: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 23:57:46.776: INFO: Container discover ready: false, restart count 0 Apr 29 23:57:46.776: INFO: Container init ready: false, restart count 0 Apr 29 23:57:46.776: INFO: Container install ready: false, restart count 0 Apr 29 23:57:46.776: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 23:57:46.776: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 23:57:46.776: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:57:46.776: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:57:46.776: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:57:46.776: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:57:46.776: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:57:46.776: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:57:46.776: INFO: Container collectd ready: true, restart count 0 Apr 29 23:57:46.776: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:57:46.776: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:57:46.776: INFO: node-exporter-tlpmt from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:57:46.776: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:57:46.776: INFO: Container node-exporter ready: true, restart 
count 0 Apr 29 23:57:46.776: INFO: rs-e2e-pts-score-2ml9j from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container e2e-pts-score ready: true, restart count 0 Apr 29 23:57:46.776: INFO: rs-e2e-pts-score-7s5kx from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container e2e-pts-score ready: true, restart count 0 Apr 29 23:57:46.776: INFO: rs-e2e-pts-score-lt4ph from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container e2e-pts-score ready: true, restart count 0 Apr 29 23:57:46.776: INFO: rs-e2e-pts-score-xh2h2 from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:46.776: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-422acabe-25a5-4656-9d2d-9379e17316f3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-422acabe-25a5-4656-9d2d-9379e17316f3 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-422acabe-25a5-4656-9d2d-9379e17316f3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:57:54.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3705" for this suite. 
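------------------------------
The NodeAffinity check above boils down to: put a random key/value label on one node, then relaunch the pod with a required node-affinity term selecting exactly that label and verify it schedules. A sketch of the term's shape (key shortened to a placeholder; "42" is the random value logged above):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	affinity := v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-<uuid>", // placeholder for the random key
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{"42"}, // the random value logged above
					}},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", affinity.NodeAffinity)
}
------------------------------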
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.140 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":11,"skipped":4498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:57:54.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Apr 29 23:57:54.896: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 29 23:57:54.904: INFO: Waiting for terminating namespaces to be deleted... 
Apr 29 23:57:54.907: INFO: Logging pods the apiserver thinks is on node node1 before test Apr 29 23:57:54.914: INFO: cmk-f5znp from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:57:54.914: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:57:54.914: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:57:54.914: INFO: cmk-init-discover-node1-gxlbt from kube-system started at 2022-04-29 20:11:43 +0000 UTC (3 container statuses recorded) Apr 29 23:57:54.914: INFO: Container discover ready: false, restart count 0 Apr 29 23:57:54.914: INFO: Container init ready: false, restart count 0 Apr 29 23:57:54.914: INFO: Container install ready: false, restart count 0 Apr 29 23:57:54.914: INFO: kube-flannel-47phs from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kube-flannel ready: true, restart count 2 Apr 29 23:57:54.914: INFO: kube-multus-ds-amd64-kkz4q from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:57:54.914: INFO: kube-proxy-v9tgj from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:57:54.914: INFO: kubernetes-dashboard-785dcbb76d-d2k5n from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kubernetes-dashboard ready: true, restart count 1 Apr 29 23:57:54.914: INFO: kubernetes-metrics-scraper-5558854cb-g47c2 from kube-system started at 2022-04-29 20:00:45 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Apr 29 23:57:54.914: INFO: nginx-proxy-node1 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:57:54.914: INFO: node-feature-discovery-worker-kbl9s from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:57:54.914: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:57:54.914: INFO: collectd-ccgw2 from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:57:54.914: INFO: Container collectd ready: true, restart count 0 Apr 29 23:57:54.914: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:57:54.914: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:57:54.914: INFO: node-exporter-c8777 from monitoring started at 2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:57:54.914: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:57:54.914: INFO: Container node-exporter ready: true, restart count 0 Apr 29 23:57:54.914: INFO: prometheus-k8s-0 from monitoring started at 2022-04-29 20:13:38 +0000 UTC (4 container statuses recorded) Apr 29 23:57:54.914: INFO: Container config-reloader ready: true, restart count 0 Apr 29 23:57:54.914: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Apr 29 
23:57:54.914: INFO: Container grafana ready: true, restart count 0 Apr 29 23:57:54.914: INFO: Container prometheus ready: true, restart count 1 Apr 29 23:57:54.914: INFO: tas-telemetry-aware-scheduling-84ff454dfb-khdw5 from monitoring started at 2022-04-29 20:16:34 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container tas-extender ready: true, restart count 0 Apr 29 23:57:54.914: INFO: with-labels from sched-pred-3705 started at 2022-04-29 23:57:50 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container with-labels ready: true, restart count 0 Apr 29 23:57:54.914: INFO: test-pod from sched-priority-628 started at 2022-04-29 23:57:28 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.914: INFO: Container test-pod ready: false, restart count 0 Apr 29 23:57:54.914: INFO: Logging pods the apiserver thinks is on node node2 before test Apr 29 23:57:54.924: INFO: cmk-74bh9 from kube-system started at 2022-04-29 20:12:25 +0000 UTC (2 container statuses recorded) Apr 29 23:57:54.924: INFO: Container nodereport ready: true, restart count 0 Apr 29 23:57:54.924: INFO: Container reconcile ready: true, restart count 0 Apr 29 23:57:54.924: INFO: cmk-init-discover-node2-csdn7 from kube-system started at 2022-04-29 20:12:03 +0000 UTC (3 container statuses recorded) Apr 29 23:57:54.924: INFO: Container discover ready: false, restart count 0 Apr 29 23:57:54.924: INFO: Container init ready: false, restart count 0 Apr 29 23:57:54.924: INFO: Container install ready: false, restart count 0 Apr 29 23:57:54.924: INFO: cmk-webhook-6c9d5f8578-b9mdv from kube-system started at 2022-04-29 20:12:26 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container cmk-webhook ready: true, restart count 0 Apr 29 23:57:54.924: INFO: kube-flannel-dbcj8 from kube-system started at 2022-04-29 20:00:03 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container kube-flannel ready: true, restart count 3 Apr 29 23:57:54.924: INFO: kube-multus-ds-amd64-7slcd from kube-system started at 2022-04-29 20:00:12 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container kube-multus ready: true, restart count 1 Apr 29 23:57:54.924: INFO: kube-proxy-k6tv2 from kube-system started at 2022-04-29 19:59:08 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container kube-proxy ready: true, restart count 2 Apr 29 23:57:54.924: INFO: nginx-proxy-node2 from kube-system started at 2022-04-29 19:59:05 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container nginx-proxy ready: true, restart count 2 Apr 29 23:57:54.924: INFO: node-feature-discovery-worker-jtjjb from kube-system started at 2022-04-29 20:08:04 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container nfd-worker ready: true, restart count 0 Apr 29 23:57:54.924: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5 from kube-system started at 2022-04-29 20:09:17 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container kube-sriovdp ready: true, restart count 0 Apr 29 23:57:54.924: INFO: collectd-zxs8j from monitoring started at 2022-04-29 20:17:24 +0000 UTC (3 container statuses recorded) Apr 29 23:57:54.924: INFO: Container collectd ready: true, restart count 0 Apr 29 23:57:54.924: INFO: Container collectd-exporter ready: true, restart count 0 Apr 29 23:57:54.924: INFO: Container rbac-proxy ready: true, restart count 0 Apr 29 23:57:54.924: INFO: node-exporter-tlpmt from monitoring started at 
2022-04-29 20:13:28 +0000 UTC (2 container statuses recorded) Apr 29 23:57:54.924: INFO: Container kube-rbac-proxy ready: true, restart count 0 Apr 29 23:57:54.924: INFO: Container node-exporter ready: true, restart count 0 Apr 29 23:57:54.924: INFO: rs-e2e-pts-score-2ml9j from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container e2e-pts-score ready: true, restart count 0 Apr 29 23:57:54.924: INFO: rs-e2e-pts-score-7s5kx from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container e2e-pts-score ready: true, restart count 0 Apr 29 23:57:54.924: INFO: rs-e2e-pts-score-lt4ph from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container e2e-pts-score ready: true, restart count 0 Apr 29 23:57:54.924: INFO: rs-e2e-pts-score-xh2h2 from sched-priority-628 started at 2022-04-29 23:57:20 +0000 UTC (1 container statuses recorded) Apr 29 23:57:54.924: INFO: Container e2e-pts-score ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4dc5f9ff-0d56-4e3b-897d-e836c0f2e2be 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-4dc5f9ff-0d56-4e3b-897d-e836c0f2e2be off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-4dc5f9ff-0d56-4e3b-897d-e836c0f2e2be [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:58:11.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7951" for this suite. 
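------------------------------
The hostPort test above shows that the scheduler's port-conflict rule keys on the full (hostIP, hostPort, protocol) triple, not the port number alone: all three pods bind hostPort 54321 on one node yet none conflict. A sketch of the three port specs, built with a hypothetical helper:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// portSpec builds the container port shape used by pod1/pod2/pod3 above:
// the same hostPort, but a differing hostIP or protocol.
func portSpec(hostIP string, proto v1.Protocol) v1.ContainerPort {
	return v1.ContainerPort{
		ContainerPort: 54321,
		HostPort:      54321,
		HostIP:        hostIP,
		Protocol:      proto,
	}
}

func main() {
	pod1 := portSpec("127.0.0.1", v1.ProtocolTCP)     // scheduled first
	pod2 := portSpec("10.10.190.208", v1.ProtocolTCP) // different hostIP: no conflict
	pod3 := portSpec("10.10.190.208", v1.ProtocolUDP) // different protocol: no conflict
	fmt.Println(pod1, pod2, pod3)
}
------------------------------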
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":12,"skipped":5268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:58:11.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Apr 29 23:58:11.067: INFO: Waiting up to 1m0s for all nodes to be ready Apr 29 23:59:11.123: INFO: Waiting for terminating namespaces to be deleted... Apr 29 23:59:11.125: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 29 23:59:11.144: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting Apr 29 23:59:11.144: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting Apr 29 23:59:11.144: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 29 23:59:11.144: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Apr 29 23:59:11.159: INFO: ComputeCPUMemFraction for node: node1 Apr 29 23:59:11.159: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400 Apr 29 23:59:11.159: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600 Apr 29 23:59:11.159: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000 Apr 29 23:59:11.159: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840 Apr 29 23:59:11.159: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000 Apr 29 23:59:11.159: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 29 23:59:11.159: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600 Apr 29 23:59:11.159: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 29 23:59:11.159: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Apr 29 23:59:11.159: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Apr 29 23:59:11.159: INFO: ComputeCPUMemFraction for node: node2 Apr 29 23:59:11.159: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400 Apr 29 23:59:11.159: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600 Apr 29 23:59:11.159: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000 Apr 29 23:59:11.159: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840 Apr 29 23:59:11.159: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 29 23:59:11.159: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200 Apr 29 23:59:11.159: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600 Apr 29 23:59:11.159: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200 Apr 29 23:59:11.159: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Apr 29 23:59:11.159: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. 
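------------------------------
The anti-affinity run places a labelled pod first (pod-with-label-security-s1, visible in the node2 tally below), then launches a pod whose anti-affinity term repels that label per hostname, so the second pod should land on node1. A sketch of a soft (weighted) anti-affinity term of that shape; the weight and selector values are illustrative, and the test's exact term may use a different flavor:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	term := v1.PodAffinityTerm{
		TopologyKey: "kubernetes.io/hostname", // the label verified above
		LabelSelector: &metav1.LabelSelector{
			MatchExpressions: []metav1.LabelSelectorRequirement{{
				Key:      "security",
				Operator: metav1.LabelSelectorOpIn,
				Values:   []string{"S1"}, // illustrative, per the pod's name
			}},
		},
	}
	antiAffinity := v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
				Weight:          10, // illustrative weight
				PodAffinityTerm: term,
			}},
		},
	}
	fmt.Printf("%+v\n", antiAffinity.PodAntiAffinity)
}
------------------------------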
STEP: Verifying the node has a label kubernetes.io/hostname
Apr 29 23:59:15.206: INFO: ComputeCPUMemFraction for node: node1
Apr 29 23:59:15.206: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400
Apr 29 23:59:15.206: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600
Apr 29 23:59:15.206: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000
Apr 29 23:59:15.206: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840
Apr 29 23:59:15.206: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000
Apr 29 23:59:15.206: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 29 23:59:15.206: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600
Apr 29 23:59:15.206: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 29 23:59:15.206: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
Apr 29 23:59:15.206: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
Apr 29 23:59:15.206: INFO: ComputeCPUMemFraction for node: node2
Apr 29 23:59:15.206: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400
Apr 29 23:59:15.206: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600
Apr 29 23:59:15.206: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000
Apr 29 23:59:15.206: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840
Apr 29 23:59:15.206: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 29 23:59:15.206: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600
Apr 29 23:59:15.206: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200
Apr 29 23:59:15.206: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Apr 29 23:59:15.206: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
Apr 29 23:59:15.206: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
Apr 29 23:59:15.218: INFO: Waiting for running...
Apr 29 23:59:15.221: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Apr 29 23:59:25.290: INFO: ComputeCPUMemFraction for node: node1
Apr 29 23:59:25.290: INFO: Pod for on the node: cmk-f5znp, Cpu: 200, Mem: 419430400
Apr 29 23:59:25.290: INFO: Pod for on the node: cmk-init-discover-node1-gxlbt, Cpu: 300, Mem: 629145600
Apr 29 23:59:25.290: INFO: Pod for on the node: kube-flannel-47phs, Cpu: 150, Mem: 64000000
Apr 29 23:59:25.290: INFO: Pod for on the node: kube-multus-ds-amd64-kkz4q, Cpu: 100, Mem: 94371840
Apr 29 23:59:25.290: INFO: Pod for on the node: kube-proxy-v9tgj, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-d2k5n, Cpu: 50, Mem: 64000000
Apr 29 23:59:25.290: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-g47c2, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 29 23:59:25.290: INFO: Pod for on the node: node-feature-discovery-worker-kbl9s, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2fslq, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: collectd-ccgw2, Cpu: 300, Mem: 629145600
Apr 29 23:59:25.290: INFO: Pod for on the node: node-exporter-c8777, Cpu: 112, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 29 23:59:25.290: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-khdw5, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: fa8ffed3-e93f-4b12-b1d8-5091b8d41969-0, Cpu: 45263, Mem: 105568540672
Apr 29 23:59:25.290: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
Apr 29 23:59:25.290: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
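The filler pod fa8ffed3-e93f-4b12-b1d8-5091b8d41969-0 is sized to lift node1 to exactly the 0.6 target fraction: 0.6 × 77000 − 937 = 45263 milli-CPU (and node2's counterpart in the next block gets 0.6 × 77000 − 487 = 45713). A sketch of that sizing arithmetic, assuming the upstream balancing helper works roughly this way — the real code also balances memory and evidently rounds a little differently, as the 0.6000703 memFraction above shows:

    package main

    import "fmt"

    // paddingMilliCPU returns the CPU request a filler pod needs so that the
    // node's total requested CPU reaches targetFraction of its allocatable
    // capacity. Illustrative reconstruction of the arithmetic in the log.
    func paddingMilliCPU(targetFraction float64, allocatable, requested int64) int64 {
    	return int64(targetFraction*float64(allocatable)+0.5) - requested // +0.5: round to nearest
    }

    func main() {
    	fmt.Println(paddingMilliCPU(0.6, 77000, 937)) // 45263 — pod fa8ffed3-e93f-4b12-b1d8-5091b8d41969-0 on node1
    	fmt.Println(paddingMilliCPU(0.6, 77000, 487)) // 45713 — pod f80366ff-c0e2-4240-aa78-073b8aeb92ae-0 on node2
    }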
Apr 29 23:59:25.290: INFO: ComputeCPUMemFraction for node: node2
Apr 29 23:59:25.290: INFO: Pod for on the node: cmk-74bh9, Cpu: 200, Mem: 419430400
Apr 29 23:59:25.290: INFO: Pod for on the node: cmk-init-discover-node2-csdn7, Cpu: 300, Mem: 629145600
Apr 29 23:59:25.290: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-b9mdv, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: kube-flannel-dbcj8, Cpu: 150, Mem: 64000000
Apr 29 23:59:25.290: INFO: Pod for on the node: kube-multus-ds-amd64-7slcd, Cpu: 100, Mem: 94371840
Apr 29 23:59:25.290: INFO: Pod for on the node: kube-proxy-k6tv2, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 29 23:59:25.290: INFO: Pod for on the node: node-feature-discovery-worker-jtjjb, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zfdv5, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: collectd-zxs8j, Cpu: 300, Mem: 629145600
Apr 29 23:59:25.290: INFO: Pod for on the node: node-exporter-tlpmt, Cpu: 112, Mem: 209715200
Apr 29 23:59:25.290: INFO: Pod for on the node: f80366ff-c0e2-4240-aa78-073b8aeb92ae-0, Cpu: 45713, Mem: 106838403072
Apr 29 23:59:25.290: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Apr 29 23:59:25.290: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
Apr 29 23:59:25.290: INFO: Node: node2, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167
STEP: Trying to launch the pod with podAntiAffinity.
STEP: Wait the pod becomes running
STEP: Verify the pod was scheduled to the expected node.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:59:35.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-9208" for this suite.
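With both nodes balanced to identical fractions, only the anti-affinity term differentiates them: the new pod repels the security=s1 label, and pod-with-label-security-s1 is running on node2, so the scheduler should place the new pod on node1 — which the "Verify the pod was scheduled to the expected node" step confirms. The log does not show the pod spec itself; the following is a hedged Go sketch of what such a podAntiAffinity term looks like (the pod name and image are illustrative, and whether the test uses the required or the preferred flavour is not visible here):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Hypothetical pod that must not share a node (topology key
    	// kubernetes.io/hostname) with any pod labelled security=s1 — here,
    	// pod-with-label-security-s1 on node2 — so it should land on node1.
    	pod := &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
    		Spec: v1.PodSpec{
    			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
    			Affinity: &v1.Affinity{
    				PodAntiAffinity: &v1.PodAntiAffinity{
    					RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
    						LabelSelector: &metav1.LabelSelector{
    							MatchLabels: map[string]string{"security": "s1"},
    						},
    						TopologyKey: "kubernetes.io/hostname",
    					}},
    				},
    			},
    		},
    	}
    	fmt.Println(pod.Name)
    }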
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:84.287 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":13,"skipped":5405,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 29 23:59:35.340: INFO: Running AfterSuite actions on all nodes
Apr 29 23:59:35.340: INFO: Running AfterSuite actions on node 1
Apr 29 23:59:35.340: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0}
Ran 13 of 5773 Specs in 527.755 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped
PASS

Ginkgo ran 1 suite in 8m49.089811568s
Test Suite Passed