I1023 05:31:46.742630 21 e2e.go:129] Starting e2e run "90afca12-9d29-4767-9a30-c40e856173f2" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634967105 - Will randomize all specs
Will run 13 of 5770 specs
Oct 23 05:31:46.757: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 05:31:46.762: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 05:31:46.793: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 05:31:46.855: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 05:31:46.855: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 05:31:46.855: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 05:31:46.855: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 05:31:46.855: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 05:31:46.872: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 05:31:46.872: INFO: e2e test version: v1.21.5
Oct 23 05:31:46.873: INFO: kube-apiserver version: v1.21.1
Oct 23 05:31:46.873: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 05:31:46.878: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 05:31:46.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W1023 05:31:46.908517 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 05:31:46.908: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 05:31:46.912: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 23 05:31:46.914: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 23 05:31:46.921: INFO: Waiting for terminating namespaces to be deleted...
Oct 23 05:31:46.925: INFO: Logging pods the apiserver thinks is on node node1 before test
Oct 23 05:31:46.936: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container discover ready: false, restart count 0
Oct 23 05:31:46.936: INFO: Container init ready: false, restart count 0
Oct 23 05:31:46.936: INFO: Container install ready: false, restart count 0
Oct 23 05:31:46.936: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container nodereport ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container reconcile ready: true, restart count 0
Oct 23 05:31:46.936: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 05:31:46.936: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kube-multus ready: true, restart count 1
Oct 23 05:31:46.936: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 05:31:46.936: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 05:31:46.936: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 05:31:46.936: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 05:31:46.936: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 05:31:46.936: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 05:31:46.936: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container collectd ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 05:31:46.936: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container node-exporter ready: true, restart count 0
Oct 23 05:31:46.936: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container config-reloader ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container grafana ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container prometheus ready: true, restart count 1
Oct 23 05:31:46.936: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded)
Oct 23 05:31:46.936: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 05:31:46.936: INFO: Logging pods the apiserver thinks is on node node2 before test
Oct 23 05:31:46.950: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container discover ready: false, restart count 0
Oct 23 05:31:46.950: INFO: Container init ready: false, restart count 0
Oct 23 05:31:46.950: INFO: Container install ready: false, restart count 0
Oct 23 05:31:46.950: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container nodereport ready: true, restart count 1
Oct 23 05:31:46.950: INFO: Container reconcile ready: true, restart count 0
Oct 23 05:31:46.950: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 05:31:46.950: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 05:31:46.950: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container kube-multus ready: true, restart count 1
Oct 23 05:31:46.950: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 05:31:46.950: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.950: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 05:31:46.950: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.951: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 05:31:46.951: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.951: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 05:31:46.951: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 05:31:46.951: INFO: Container collectd ready: true, restart count 0
Oct 23 05:31:46.951: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 05:31:46.951: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 05:31:46.951: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 05:31:46.951: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 05:31:46.951: INFO: Container node-exporter ready: true, restart count 0
Oct 23 05:31:46.951: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded)
Oct 23 05:31:46.951: INFO: Container tas-extender ready: true, restart count 0
[It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
Oct 23 05:31:46.984: INFO: Pod cmk-kn29k requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod cmk-t9r2t requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod cmk-webhook-6c9d5f8578-pkwhc requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod kube-flannel-2cdvd requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod kube-flannel-xx6ls requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod kube-multus-ds-amd64-fww5b requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod kube-multus-ds-amd64-l97s4 requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod kube-proxy-5h2bl requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod kube-proxy-m9z8s requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod kubernetes-dashboard-785dcbb76d-kc4kh requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod kubernetes-metrics-scraper-5558854cb-dfn2n requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod node-feature-discovery-worker-2pvq5 requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod node-feature-discovery-worker-8k8m5 requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.984: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.984: INFO: Pod collectd-n9sbv requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.985: INFO: Pod collectd-xhdgw requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.985: INFO: Pod node-exporter-fjc79 requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.985: INFO: Pod node-exporter-v656r requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.985: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.985: INFO: Pod prometheus-operator-585ccfb458-hwjk2 requesting local ephemeral resource =0 on Node node1
Oct 23 05:31:46.985: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-gltgg requesting local ephemeral resource =0 on Node node2
Oct 23 05:31:46.985: INFO: Using pod capacity: 40542413347
Oct 23 05:31:46.985: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473
Oct 23 05:31:46.985: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473
STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one
Oct 23 05:31:47.169: INFO: Waiting for running...
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b091ebe0032bb7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-0 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b091ed385c0f7c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b091ed87653bdd], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.325996095s]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b091ed93e7a872], Reason = [Created], Message = [Created container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b091ede65bbc52], Reason = [Started], Message = [Started container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b091ebe086ca6e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-1 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b091eced656e10], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b091ed05b64a6a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 407.945745ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b091ed145c3939], Reason = [Created], Message = [Created container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b091ed48443b9d], Reason = [Started], Message = [Started container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b091ebe541f370], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-10 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b091ee86621f15], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b091eedf170a56], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.488246946s]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b091eee5c259e1], Reason = [Created], Message = [Created container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b091eeec6bab83], Reason = [Started], Message = [Started container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b091ebe5d35b8a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-11 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b091eddbc61ee0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b091edf0df79f8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 353.977033ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b091ee0d645360], Reason = [Created], Message = [Created container overcommit-11]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b091ee61258ce8], Reason = [Started], Message = [Started container overcommit-11]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b091ebe659dfe8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-12 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b091ee75b3748a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b091ee87b1e972], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 301.883321ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b091ee91dc15a8], Reason = [Created], Message = [Created container overcommit-12]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b091ee98aeb012], Reason = [Started], Message = [Started container overcommit-12]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b091ebe6fe982f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-13 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b091edf8eb730c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b091ee4800b863], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.32678714s]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b091ee4eb560e0], Reason = [Created], Message = [Created container overcommit-13]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b091ee55783921], Reason = [Started], Message = [Started container overcommit-13]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b091ebe7787885], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-14 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b091edf5f30897], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b091ee09f01fc5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 335.346508ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b091ee10a821a6], Reason = [Created], Message = [Created container overcommit-14]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b091ee184ab2bd], Reason = [Started], Message = [Started container overcommit-14]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b091ebe80cd75e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-15 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b091ed12a4f0fd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b091ed359e25c0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 586.748883ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b091ed495d6135], Reason = [Created], Message = [Created container overcommit-15]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b091ed589f300e], Reason = [Started], Message = [Started container overcommit-15]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b091ebe8939eeb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-16 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b091edc3054b51], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b091edd5903bc5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 311.090163ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b091edf8b0ede1], Reason = [Created], Message = [Created container overcommit-16]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b091ee0787543a], Reason = [Started], Message = [Started container overcommit-16]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b091ebe9213e6a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-17 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b091edf7a670c7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b091ee34ac6374], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.023793673s]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b091ee3bd1ed5c], Reason = [Created], Message = [Created container overcommit-17]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b091ee42e65742], Reason = [Started], Message = [Started container overcommit-17]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b091ebe9b2e0ca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-18 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b091edf7924f9c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b091ee1b0cbd6f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 595.220525ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b091ee2220a3e9], Reason = [Created], Message = [Created container overcommit-18]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b091ee29b010a0], Reason = [Started], Message = [Started container overcommit-18]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b091ebea4c5e94], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-19 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b091ed47b4d57a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b091ed5bae7cd9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 335.12011ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b091ed76a438fb], Reason = [Created], Message = [Created container overcommit-19]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b091edc8fde6cf], Reason = [Started], Message = [Started container overcommit-19]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b091ebe1011bd2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-2 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b091ed3530823d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b091ed72b2a894], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.03193399s]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b091ed911793fd], Reason = [Created], Message = [Created container overcommit-2]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b091ede61fd3a3], Reason = [Started], Message = [Started container overcommit-2]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b091ebe17aecd1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-3 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b091ee86095a25], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b091eec764fb57], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.096512799s]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b091eecef13c0a], Reason = [Created], Message = [Created container overcommit-3]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b091eed5c22d2f], Reason = [Started], Message = [Started container overcommit-3]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b091ebe20cf198], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-4 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b091ecaf8aa5a9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b091ecc905266e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 427.441849ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b091ecece9514a], Reason = [Created], Message = [Created container overcommit-4]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b091ed82442dd6], Reason = [Started], Message = [Started container overcommit-4]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b091ebe28c23f9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-5 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b091ecedaaa835], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b091ed1e376a25], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 814.522361ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b091ed2509dba1], Reason = [Created], Message = [Created container overcommit-5]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b091ed4b69e1f4], Reason = [Started], Message = [Started container overcommit-5]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b091ebe322c7ce], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-6 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b091ec6d1207c9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b091eca3a485e7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 915.561584ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b091ecdef4fd34], Reason = [Created], Message = [Created container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b091ed20fb7912], Reason = [Started], Message = [Started container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b091ebe3a4c8ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-7 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b091ee83598ae4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b091ee9e3103b6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 450.324097ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b091eea4d92ab1], Reason = [Created], Message = [Created container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b091eeabca7205], Reason = [Started], Message = [Started container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b091ebe420aaca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-8 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b091ee86a4f710], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b091eef3b46ce1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.829724842s]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b091eefa1a5389], Reason = [Created], Message = [Created container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b091ef00db898c], Reason = [Started], Message = [Started container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b091ebe4a796eb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-257/overcommit-9 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b091ee860790c2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b091eeb32167c4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 756.661679ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b091eeba42439c], Reason = [Created], Message = [Created container overcommit-9]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b091eec1494db3], Reason = [Started], Message = [Started container overcommit-9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16b091ef6c9d3b4a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 05:32:03.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-257" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:16.385 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":1,"skipped":268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial]
  Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 05:32:03.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Oct 23 05:32:03.292: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 23 05:33:03.344: INFO: Waiting for terminating namespaces to be deleted...
Oct 23 05:33:03.346: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 05:33:03.365: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 05:33:03.365: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 05:33:03.365: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 05:33:03.365: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 05:33:03.382: INFO: ComputeCPUMemFraction for node: node1
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Oct 23 05:33:03.382: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
Oct 23 05:33:03.382: INFO: ComputeCPUMemFraction for node: node2
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.382: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Oct 23 05:33:03.382: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
[It] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
Oct 23 05:33:03.402: INFO: ComputeCPUMemFraction for node: node1
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.402: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Oct 23 05:33:03.403: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
Oct 23 05:33:03.403: INFO: ComputeCPUMemFraction for node: node2
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200
Oct 23 05:33:03.403: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Oct 23 05:33:03.403: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
Oct 23 05:33:03.418: INFO: Waiting for running...
Oct 23 05:33:03.423: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Oct 23 05:33:08.492: INFO: ComputeCPUMemFraction for node: node1
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1
Oct 23 05:33:08.492: INFO: Node: node1, totalRequestedMemResource: 1340355450880, memAllocatableVal: 178884632576, memFraction: 1
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Oct 23 05:33:08.492: INFO: ComputeCPUMemFraction for node: node2
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Pod for on the node: f9d6f8ce-8c73-45d2-bc72-02d6b12b894b-0, Cpu: 38400, Mem: 89350039552
Oct 23 05:33:08.492: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1
Oct 23 05:33:08.492: INFO: Node: node2, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884628480, memFraction: 1
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1847 to 1
STEP: Verify the pods should not scheduled to the node: node1
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1847, will wait for the garbage collector to delete the pods
Oct 23 05:33:14.681: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.053404ms
Oct 23 05:33:14.782: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 101.197613ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 05:33:24.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-1847" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153

• [SLOW TEST:81.352 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":2,"skipped":366,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run
  verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 05:33:24.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 23 05:33:24.657: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 23 05:33:24.666: INFO: Waiting for terminating namespaces to be deleted...
Oct 23 05:33:24.668: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 05:33:24.675: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 05:33:24.675: INFO: Container discover ready: false, restart count 0 Oct 23 05:33:24.675: INFO: Container init ready: false, restart count 0 Oct 23 05:33:24.675: INFO: Container install ready: false, restart count 0 Oct 23 05:33:24.675: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:33:24.675: INFO: Container nodereport ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:33:24.675: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 05:33:24.675: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:33:24.675: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:33:24.675: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 05:33:24.675: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 05:33:24.675: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:33:24.675: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:33:24.675: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:33:24.675: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:33:24.675: INFO: Container collectd ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:33:24.675: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:33:24.675: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 05:33:24.675: INFO: Container config-reloader ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
05:33:24.675: INFO: Container grafana ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container prometheus ready: true, restart count 1 Oct 23 05:33:24.675: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 05:33:24.675: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 05:33:24.675: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 05:33:24.684: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 05:33:24.684: INFO: Container discover ready: false, restart count 0 Oct 23 05:33:24.684: INFO: Container init ready: false, restart count 0 Oct 23 05:33:24.684: INFO: Container install ready: false, restart count 0 Oct 23 05:33:24.684: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:33:24.684: INFO: Container nodereport ready: true, restart count 1 Oct 23 05:33:24.684: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:33:24.684: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 05:33:24.684: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 05:33:24.684: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:33:24.684: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:33:24.684: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:33:24.684: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:33:24.684: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:33:24.684: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:33:24.684: INFO: Container collectd ready: true, restart count 0 Oct 23 05:33:24.684: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:33:24.684: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:33:24.684: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:33:24.684: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:33:24.684: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:33:24.684: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 
+0000 UTC (1 container statuses recorded) Oct 23 05:33:24.684: INFO: Container tas-extender ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b0920391ac0e29], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b09203eec89f77], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b092055b072f11], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9935/filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b09205b4ef21e9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b09205c534f31d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 273.004546ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b09205cbfda3f3], Reason = [Created], Message = [Created container filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815] STEP: Considering event: Type = [Normal], Name = [filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815.16b09205d2e53a12], Reason = [Started], Message = [Started container filler-pod-28a90d4d-e138-40d1-97b5-eb78a4e2c815] STEP: Considering event: Type = [Normal], Name = [without-label.16b09202a0bc9c9a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9935/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16b09202fe95790b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16b0920310d08a80], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 305.853751ms] STEP: Considering event: Type = [Normal], Name = [without-label.16b0920317eecf6d], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b092031ec1a529], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b0920390f4a21c], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = 
[without-label.16b0920394d844db], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-7x29r" : object "sched-pred-9935"/"kube-root-ca.crt" not registered] STEP: Considering event: Type = [Warning], Name = [additional-podebde1ca7-0520-476c-820d-685174b0bea4.16b092065e65412f], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:33:41.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9935" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.188 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":3,"skipped":1274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:33:41.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 05:33:41.845: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 05:34:41.894: INFO: Waiting for terminating namespaces to be deleted... 
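Editor's note on the pod-overhead spec that passed just above: it registers a RuntimeClass whose Overhead.PodFixed is charged on top of the container requests, plus a fake extended resource (example.com/beardsecond) used to fill the node, which is why the second pod fails scheduling with "Insufficient example.com/beardsecond". A minimal sketch of the two objects involved, built with the upstream API types; object names and the exact overhead quantities are illustrative, not taken from the test.

```go
// Sketch only: a RuntimeClass with a fixed pod overhead and a Pod that uses it.
// The scheduler adds Overhead.PodFixed to the pod's requests when checking node
// capacity, which is what "verify pod overhead is accounted for" exercises.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"}, // hypothetical name
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			// Every pod run with this RuntimeClass is charged an extra
			// 250m CPU / 120Mi memory during scheduling and admission.
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}

	runtimeClassName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-pod"}, // hypothetical name
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClassName,
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"),
					},
				},
			}},
		},
	}

	// Effective CPU demand seen by the scheduler: 500m (container) + 250m (overhead).
	for _, obj := range []interface{}{rc, pod} {
		out, _ := yaml.Marshal(obj)
		fmt.Println(string(out))
	}
}
```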
Oct 23 05:34:41.896: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 05:34:41.913: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 05:34:41.913: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 05:34:41.913: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 05:34:41.913: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 23 05:34:41.937: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:34:41.937: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 05:34:41.937: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:41.937: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:34:41.937: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Oct 23 05:34:54.035: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:34:54.035: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 05:34:54.035: INFO: ComputeCPUMemFraction for node: node1 Oct 23 
05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:34:54.035: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:34:54.035: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 05:34:54.046: INFO: Waiting for running... Oct 23 05:34:54.049: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
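Editor's note: the ComputeCPUMemFraction lines above reduce to simple arithmetic. For each node, the framework sums the CPU/memory requests of the pods it lists there and divides by the node's allocatable; the post-"balanced pods" lines that follow suggest the result is capped at 1, since the summed requests exceed allocatable yet the logged fraction is exactly 1. A quick sketch of that calculation, reproducing the node1 figures from the log; the cap at 1.0 is an inference from the output, not a quote of the framework code.

```go
// Sketch of the cpuFraction / memFraction arithmetic shown in the log.
package main

import "fmt"

// fraction returns requested/allocatable, capped at 1.0 (inferred behaviour).
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Figures logged for node1 before the balancing pods are created.
	fmt.Println(fraction(100, 77000))              // cpuFraction ~ 0.0012987...
	fmt.Println(fraction(104857600, 178884632576)) // memFraction ~ 0.0005861744...
	// Figures logged for node1 after the balancing pods: requests exceed
	// allocatable, so both fractions cap at 1.
	fmt.Println(fraction(576100, 77000))               // 1
	fmt.Println(fraction(1340355481600, 178884632576)) // 1
}
```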
Oct 23 05:34:59.119: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 05:34:59.119: INFO: Node: node2, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Oct 23 05:34:59.119: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Pod for on the node: d2f5ef33-9460-452c-b06b-b744f6838e6c-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:34:59.119: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 05:34:59.119: INFO: Node: node1, totalRequestedMemResource: 1340355481600, memAllocatableVal: 178884632576, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:35:15.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1390" for this suite. 
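Editor's note on the PodTopologySpread scoring spec above: after labelling the two worker nodes with the dedicated topology key kubernetes.io/e2e-pts-score and running a 4-replica ReplicaSet on node2, the suite expects the matching test-pod to land on node1. A minimal sketch of the kind of soft spread constraint involved, using the upstream API types; the app label and pod name are illustrative assumptions, only the topology key comes from the log.

```go
// Sketch of a soft pod-topology-spread constraint: with ScheduleAnyway the
// constraint only influences scoring, so a new matching pod prefers the
// topology domain (here, the node) with fewer matching pods.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"app": "pts-demo"}, // illustrative label
		},
		Spec: corev1.PodSpec{
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score",
				WhenUnsatisfiable: corev1.ScheduleAnyway, // soft: affects scoring only
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-demo"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```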
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:93.399 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":4,"skipped":1455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:35:15.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 05:35:15.257: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 05:35:15.266: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 05:35:15.268: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 05:35:15.277: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 05:35:15.278: INFO: Container discover ready: false, restart count 0 Oct 23 05:35:15.278: INFO: Container init ready: false, restart count 0 Oct 23 05:35:15.278: INFO: Container install ready: false, restart count 0 Oct 23 05:35:15.278: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:35:15.278: INFO: Container nodereport ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:35:15.278: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 05:35:15.278: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:35:15.278: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:35:15.278: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 05:35:15.278: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 05:35:15.278: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:35:15.278: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:35:15.278: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:35:15.278: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:35:15.278: INFO: Container collectd ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:35:15.278: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:35:15.278: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 05:35:15.278: INFO: Container config-reloader ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
05:35:15.278: INFO: Container grafana ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container prometheus ready: true, restart count 1 Oct 23 05:35:15.278: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 05:35:15.278: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 05:35:15.278: INFO: test-pod from sched-priority-1390 started at 2021-10-23 05:35:05 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.278: INFO: Container test-pod ready: true, restart count 0 Oct 23 05:35:15.278: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 05:35:15.285: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 05:35:15.285: INFO: Container discover ready: false, restart count 0 Oct 23 05:35:15.285: INFO: Container init ready: false, restart count 0 Oct 23 05:35:15.285: INFO: Container install ready: false, restart count 0 Oct 23 05:35:15.285: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:35:15.285: INFO: Container nodereport ready: true, restart count 1 Oct 23 05:35:15.285: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:35:15.285: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 05:35:15.285: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 05:35:15.285: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:35:15.285: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:35:15.285: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:35:15.285: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:35:15.285: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:35:15.285: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:35:15.285: INFO: Container collectd ready: true, restart count 0 Oct 23 05:35:15.285: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:35:15.285: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:35:15.285: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:35:15.285: INFO: Container kube-rbac-proxy ready: true, restart 
count 0 Oct 23 05:35:15.285: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:35:15.285: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container tas-extender ready: true, restart count 0 Oct 23 05:35:15.285: INFO: rs-e2e-pts-score-ff64z from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 23 05:35:15.285: INFO: rs-e2e-pts-score-g4qgn from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 23 05:35:15.285: INFO: rs-e2e-pts-score-mhkgl from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.285: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 23 05:35:15.285: INFO: rs-e2e-pts-score-ntg9m from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:15.286: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b0921c612b7571], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:35:16.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3771" for this suite. 
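Editor's note on the NodeAffinity spec above: the pod is given a node selector / required node-affinity term that no node satisfies, so it stays Pending with the FailedScheduling event quoted in the log ("node(s) didn't match Pod's node affinity/selector"). A minimal sketch of such a pod with the upstream API types; the label key and value are illustrative, chosen so that no node carries them.

```go
// Sketch of the negative case: a required node-affinity term that matches no
// node is a hard filter, so the pod is never scheduled.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				NodeAffinity: &corev1.NodeAffinity{
					// "Required" terms filter nodes; no match means no scheduling.
					RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
						NodeSelectorTerms: []corev1.NodeSelectorTerm{{
							MatchExpressions: []corev1.NodeSelectorRequirement{{
								Key:      "example.com/no-such-label", // no node carries this
								Operator: corev1.NodeSelectorOpIn,
								Values:   []string{"anything"},
							}},
						}},
					},
				},
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Println(string(out))
}
```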
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":5,"skipped":2361,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:35:16.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 05:35:16.376: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 05:35:16.385: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 05:35:16.387: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 05:35:16.397: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 05:35:16.397: INFO: Container discover ready: false, restart count 0 Oct 23 05:35:16.398: INFO: Container init ready: false, restart count 0 Oct 23 05:35:16.398: INFO: Container install ready: false, restart count 0 Oct 23 05:35:16.398: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:35:16.398: INFO: Container nodereport ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:35:16.398: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 05:35:16.398: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:35:16.398: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:35:16.398: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 05:35:16.398: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 05:35:16.398: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:35:16.398: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:35:16.398: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:35:16.398: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:35:16.398: INFO: Container collectd ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:35:16.398: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:35:16.398: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 05:35:16.398: INFO: Container config-reloader ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
05:35:16.398: INFO: Container grafana ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container prometheus ready: true, restart count 1 Oct 23 05:35:16.398: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 05:35:16.398: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 05:35:16.398: INFO: test-pod from sched-priority-1390 started at 2021-10-23 05:35:05 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.398: INFO: Container test-pod ready: true, restart count 0 Oct 23 05:35:16.398: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 05:35:16.407: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 05:35:16.407: INFO: Container discover ready: false, restart count 0 Oct 23 05:35:16.407: INFO: Container init ready: false, restart count 0 Oct 23 05:35:16.407: INFO: Container install ready: false, restart count 0 Oct 23 05:35:16.407: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:35:16.407: INFO: Container nodereport ready: true, restart count 1 Oct 23 05:35:16.407: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:35:16.407: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.407: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 05:35:16.407: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.407: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 05:35:16.408: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:35:16.408: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:35:16.408: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:35:16.408: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:35:16.408: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:35:16.408: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:35:16.408: INFO: Container collectd ready: true, restart count 0 Oct 23 05:35:16.408: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:35:16.408: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:35:16.408: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:35:16.408: INFO: Container kube-rbac-proxy ready: true, restart 
count 0 Oct 23 05:35:16.408: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:35:16.408: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container tas-extender ready: true, restart count 0 Oct 23 05:35:16.408: INFO: rs-e2e-pts-score-ff64z from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 23 05:35:16.408: INFO: rs-e2e-pts-score-g4qgn from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 23 05:35:16.408: INFO: rs-e2e-pts-score-mhkgl from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 23 05:35:16.408: INFO: rs-e2e-pts-score-ntg9m from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:16.408: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c8f5c7b0-a4a5-4d40-b43a-8ab4b897b9cf 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-c8f5c7b0-a4a5-4d40-b43a-8ab4b897b9cf off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c8f5c7b0-a4a5-4d40-b43a-8ab4b897b9cf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:35:32.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9908" for this suite. 
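Editor's note on the hostPort spec above: a host-port conflict only exists when two pods on the same node use the same (hostIP, hostPort, protocol) triple, so pod1, pod2 and pod3 can all bind hostPort 54321 on one node because each differs in hostIP or protocol. A small sketch of the three port mappings; the helper, the pinning via NodeName (the real test uses a random node label instead), and the image tag are illustrative.

```go
// Sketch of three pods sharing hostPort 54321 on one node without conflict.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod pinned to a node with a single host-port mapping.
func hostPortPod(name, nodeName, hostIP string, proto corev1.Protocol) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeName: nodeName, // simplification; the test selects the node via a label
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	pods := []corev1.Pod{
		hostPortPod("pod1", "node1", "127.0.0.1", corev1.ProtocolTCP),
		hostPortPod("pod2", "node1", "10.10.190.207", corev1.ProtocolTCP), // same port, different hostIP
		hostPortPod("pod3", "node1", "10.10.190.207", corev1.ProtocolUDP), // same hostIP, different protocol
	}
	for _, p := range pods {
		c := p.Spec.Containers[0].Ports[0]
		fmt.Printf("%s -> %s:%d/%s\n", p.Name, c.HostIP, c.HostPort, c.Protocol)
	}
}
```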
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.191 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":6,"skipped":3571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:35:32.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 05:35:32.569: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 05:35:32.577: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 05:35:32.580: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 05:35:32.588: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 05:35:32.588: INFO: Container discover ready: false, restart count 0 Oct 23 05:35:32.588: INFO: Container init ready: false, restart count 0 Oct 23 05:35:32.588: INFO: Container install ready: false, restart count 0 Oct 23 05:35:32.588: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:35:32.588: INFO: Container nodereport ready: true, restart count 0 Oct 23 05:35:32.588: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:35:32.588: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.588: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 05:35:32.588: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.588: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:35:32.588: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.588: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:35:32.588: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.588: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 05:35:32.588: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.588: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 05:35:32.588: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.588: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:35:32.589: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.589: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:35:32.589: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.589: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:35:32.589: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:35:32.589: INFO: Container collectd ready: true, restart count 0 Oct 23 05:35:32.589: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:35:32.589: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:35:32.589: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:35:32.589: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:32.589: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:35:32.589: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 05:35:32.589: INFO: Container config-reloader ready: true, restart count 0 Oct 23 05:35:32.589: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
05:35:32.589: INFO: Container grafana ready: true, restart count 0 Oct 23 05:35:32.589: INFO: Container prometheus ready: true, restart count 1 Oct 23 05:35:32.589: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 05:35:32.589: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:32.589: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 05:35:32.589: INFO: pod1 from sched-pred-9908 started at 2021-10-23 05:35:20 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.589: INFO: Container agnhost ready: true, restart count 0 Oct 23 05:35:32.589: INFO: pod2 from sched-pred-9908 started at 2021-10-23 05:35:24 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.589: INFO: Container agnhost ready: true, restart count 0 Oct 23 05:35:32.589: INFO: pod3 from sched-pred-9908 started at 2021-10-23 05:35:28 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.589: INFO: Container agnhost ready: true, restart count 0 Oct 23 05:35:32.589: INFO: test-pod from sched-priority-1390 started at 2021-10-23 05:35:05 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.589: INFO: Container test-pod ready: false, restart count 0 Oct 23 05:35:32.589: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 05:35:32.605: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 05:35:32.605: INFO: Container discover ready: false, restart count 0 Oct 23 05:35:32.605: INFO: Container init ready: false, restart count 0 Oct 23 05:35:32.605: INFO: Container install ready: false, restart count 0 Oct 23 05:35:32.605: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:35:32.605: INFO: Container nodereport ready: true, restart count 1 Oct 23 05:35:32.605: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:35:32.605: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 05:35:32.605: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 05:35:32.605: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:35:32.605: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:35:32.605: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:35:32.605: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:35:32.605: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.605: INFO: Container kube-sriovdp ready: 
true, restart count 0 Oct 23 05:35:32.605: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:35:32.605: INFO: Container collectd ready: true, restart count 0 Oct 23 05:35:32.605: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:35:32.605: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:35:32.605: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:35:32.605: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:35:32.605: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:35:32.605: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.606: INFO: Container tas-extender ready: true, restart count 0 Oct 23 05:35:32.606: INFO: rs-e2e-pts-score-ff64z from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.606: INFO: Container e2e-pts-score ready: false, restart count 0 Oct 23 05:35:32.606: INFO: rs-e2e-pts-score-g4qgn from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.606: INFO: Container e2e-pts-score ready: false, restart count 0 Oct 23 05:35:32.606: INFO: rs-e2e-pts-score-mhkgl from sched-priority-1390 started at 2021-10-23 05:34:59 +0000 UTC (1 container statuses recorded) Oct 23 05:35:32.606: INFO: Container e2e-pts-score ready: false, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-65676f16-3113-4305-ab63-7c234441e015=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-188e66d9-a1b3-4aab-a4d7-51cf3d59404c testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-188e66d9-a1b3-4aab-a4d7-51cf3d59404c off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-188e66d9-a1b3-4aab-a4d7-51cf3d59404c STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-65676f16-3113-4305-ab63-7c234441e015=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:35:40.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9677" for this suite. 
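Editor's note on the taints-tolerations spec above: the node receives a NoSchedule taint with a random key and the value testing-taint-value, and the relaunched pod carries a toleration with the same key, value and effect, which is why it schedules onto the tainted node. A minimal sketch of the matching pair, using the taint key and value from the log; how the pod spec embeds the toleration is left out for brevity.

```go
// Sketch of a matching taint/toleration pair: a pod whose Spec.Tolerations
// contains this toleration is admitted onto the tainted node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-65676f16-3113-4305-ab63-7c234441e015",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	for _, obj := range []interface{}{taint, toleration} {
		out, _ := yaml.Marshal(obj)
		fmt.Println(string(out))
	}
}
```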
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.175 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":7,"skipped":3811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:35:40.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 05:35:40.747: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 05:36:40.803: INFO: Waiting for terminating namespaces to be deleted... Oct 23 05:36:40.805: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 05:36:40.829: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 05:36:40.829: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 05:36:40.829: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 05:36:40.829: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 23 05:36:40.846: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:36:40.846: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 05:36:40.846: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.846: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.847: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.847: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.847: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:36:40.847: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:36:40.847: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Oct 23 05:36:44.887: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:36:44.887: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 05:36:44.887: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.887: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Pod for 
on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:44.888: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:36:44.888: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 05:36:44.899: INFO: Waiting for running... Oct 23 05:36:44.902: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 05:36:49.972: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:36:49.972: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
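Note on the figures above: the cpuFraction and memFraction values printed by ComputeCPUMemFraction are plain requested/allocatable ratios (100/77000 ≈ 0.0012987, 104857600/178884632576 ≈ 0.00058617), and later in this log they read exactly 1 once the summed requests exceed allocatable, so the ratio is evidently capped at 1. A minimal stand-alone Go sketch of that arithmetic, with illustrative names rather than the framework's own helper:

// fraction.go - illustrative only; mirrors the ratios printed by
// ComputeCPUMemFraction in this log (function name is hypothetical).
package main

import "fmt"

// fraction returns requested/allocatable, capped at 1 so that fully
// (or over-) committed nodes report a fraction of exactly 1.
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Values taken from the node1 entries above.
	fmt.Println(fraction(100, 77000))              // cpuFraction ~= 0.0012987
	fmt.Println(fraction(104857600, 178884632576)) // memFraction ~= 0.00058617
	// Values from the "after create balanced pods" entries later in the log.
	fmt.Println(fraction(576100, 77000))           // over-committed, capped at 1
}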
Oct 23 05:36:49.972: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 23 05:36:49.972: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:36:49.972: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:37:06.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9283" for this suite. 
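The test above first places pod-with-label-security-s1, then launches a second pod whose podAntiAffinity refuses to share a kubernetes.io/hostname topology domain with it, and verifies the second pod lands on the other node. A hedged Go sketch of such a pod spec; the security=S1 label is an assumption inferred from the pod name, and the image is only a placeholder:

// antiaffinity.go - illustrative pod spec with PodAntiAffinity over
// kubernetes.io/hostname; label and image are assumptions, not the test's.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "anti-affinity-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "registry.k8s.io/pause:3.9"}},
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					// Hard requirement: do not co-locate with pods carrying
					// the assumed security=S1 label on the same hostname.
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"security": "S1"},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}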
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:85.300 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":8,"skipped":4016,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:37:06.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 23 05:37:06.065: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 05:38:06.115: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
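In the preemption case above the suite pads both nodes with 10 units of a fake extended resource, fills 9/10 of it with one high-priority and three low-priority pods, then submits a "medium" pod carrying a topology spread constraint over the dedicated kubernetes.io/e2e-pts-preemption key, so a low-priority pod has to be preempted to make room. A rough Go sketch of what such a medium pod could look like; the extended-resource name, priority class name, labels, image, and request size are assumptions, only the topology key comes from the log:

// preemption.go - illustrative "medium" pod: a priority class plus a
// topology-spread constraint over the dedicated test topology key.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fakeRes := corev1.ResourceName("example.com/fakePTSRes") // assumed resource name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "preemption"}, // assumed label
		},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // assumed class name
			Containers: []corev1.Container{{
				Name:  "medium",
				Image: "registry.k8s.io/pause:3.9",
				Resources: corev1.ResourceRequirements{
					// Extended resources must set requests == limits.
					Requests: corev1.ResourceList{fakeRes: resource.MustParse("1")},
					Limits:   corev1.ResourceList{fakeRes: resource.MustParse("1")},
				},
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption", // from the log above
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "preemption"},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}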
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:38:44.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1553" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:98.402 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":9,"skipped":4529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:38:44.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 05:38:44.460: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 05:38:44.468: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 05:38:44.470: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 05:38:44.478: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 05:38:44.478: INFO: Container discover ready: false, restart count 0 Oct 23 05:38:44.478: INFO: Container init ready: false, restart count 0 Oct 23 05:38:44.478: INFO: Container install ready: false, restart count 0 Oct 23 05:38:44.478: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:38:44.478: INFO: Container nodereport ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:38:44.478: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 05:38:44.478: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:38:44.478: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:38:44.478: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 05:38:44.478: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 05:38:44.478: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:38:44.478: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:38:44.478: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:38:44.478: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:38:44.478: INFO: Container collectd ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:38:44.478: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:38:44.478: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 05:38:44.478: INFO: Container config-reloader ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
05:38:44.478: INFO: Container grafana ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container prometheus ready: true, restart count 1 Oct 23 05:38:44.478: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 05:38:44.478: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 05:38:44.478: INFO: low-1 from sched-preemption-1553 started at 2021-10-23 05:38:18 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container low-1 ready: true, restart count 0 Oct 23 05:38:44.478: INFO: medium from sched-preemption-1553 started at 2021-10-23 05:38:35 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.478: INFO: Container medium ready: true, restart count 0 Oct 23 05:38:44.478: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 05:38:44.488: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 05:38:44.488: INFO: Container discover ready: false, restart count 0 Oct 23 05:38:44.488: INFO: Container init ready: false, restart count 0 Oct 23 05:38:44.488: INFO: Container install ready: false, restart count 0 Oct 23 05:38:44.488: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:38:44.488: INFO: Container nodereport ready: true, restart count 1 Oct 23 05:38:44.488: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:38:44.488: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 05:38:44.488: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 05:38:44.488: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:38:44.488: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:38:44.488: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:38:44.488: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:38:44.488: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:38:44.488: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:38:44.488: INFO: Container collectd ready: true, restart count 0 Oct 23 05:38:44.488: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:38:44.488: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 
05:38:44.488: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:38:44.488: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:38:44.488: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:38:44.488: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container tas-extender ready: true, restart count 0 Oct 23 05:38:44.488: INFO: high from sched-preemption-1553 started at 2021-10-23 05:38:14 +0000 UTC (1 container statuses recorded) Oct 23 05:38:44.488: INFO: Container high ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:38:56.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3272" for this suite. 
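The MaxSkew=1 case above tags both nodes with the dedicated kubernetes.io/e2e-pts-filter topology key and creates four pods (the rs-e2e-pts-filter-* replicas seen later in this log) that must end up two per node. A hedged Go sketch of the kind of spread constraint involved; the pod label used in the selector is an assumption:

// filtering.go - illustrative spread constraint for "4 pods with
// MaxSkew=1 are evenly distributed into 2 nodes".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	c := corev1.TopologySpreadConstraint{
		MaxSkew:           1,                              // per-topology pod counts may differ by at most 1
		TopologyKey:       "kubernetes.io/e2e-pts-filter", // dedicated key from the log
		WhenUnsatisfiable: corev1.DoNotSchedule,           // hard filter, not a soft preference
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // assumed label
		},
	}
	fmt.Printf("maxSkew=%d topologyKey=%s\n", c.MaxSkew, c.TopologyKey)
}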
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.176 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":10,"skipped":4651,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:38:56.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 23 05:38:56.634: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 23 05:38:56.642: INFO: Waiting for terminating namespaces to be deleted... 
Oct 23 05:38:56.646: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 23 05:38:56.655: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded) Oct 23 05:38:56.655: INFO: Container discover ready: false, restart count 0 Oct 23 05:38:56.655: INFO: Container init ready: false, restart count 0 Oct 23 05:38:56.655: INFO: Container install ready: false, restart count 0 Oct 23 05:38:56.655: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:38:56.655: INFO: Container nodereport ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:38:56.655: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 05:38:56.655: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:38:56.655: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:38:56.655: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 05:38:56.655: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 05:38:56.655: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:38:56.655: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:38:56.655: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:38:56.655: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:38:56.655: INFO: Container collectd ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:38:56.655: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:38:56.655: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded) Oct 23 05:38:56.655: INFO: Container config-reloader ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 
05:38:56.655: INFO: Container grafana ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container prometheus ready: true, restart count 1 Oct 23 05:38:56.655: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded) Oct 23 05:38:56.655: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:38:56.655: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 05:38:56.655: INFO: rs-e2e-pts-filter-96g9h from sched-pred-3272 started at 2021-10-23 05:38:52 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 05:38:56.655: INFO: rs-e2e-pts-filter-rjggq from sched-pred-3272 started at 2021-10-23 05:38:52 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 05:38:56.655: INFO: low-1 from sched-preemption-1553 started at 2021-10-23 05:38:18 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container low-1 ready: false, restart count 0 Oct 23 05:38:56.655: INFO: medium from sched-preemption-1553 started at 2021-10-23 05:38:35 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.655: INFO: Container medium ready: false, restart count 0 Oct 23 05:38:56.655: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 23 05:38:56.665: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded) Oct 23 05:38:56.665: INFO: Container discover ready: false, restart count 0 Oct 23 05:38:56.665: INFO: Container init ready: false, restart count 0 Oct 23 05:38:56.665: INFO: Container install ready: false, restart count 0 Oct 23 05:38:56.665: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded) Oct 23 05:38:56.665: INFO: Container nodereport ready: true, restart count 1 Oct 23 05:38:56.665: INFO: Container reconcile ready: true, restart count 0 Oct 23 05:38:56.665: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 05:38:56.665: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 05:38:56.665: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container kube-multus ready: true, restart count 1 Oct 23 05:38:56.665: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 05:38:56.665: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 05:38:56.665: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 05:38:56.665: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded) 
Oct 23 05:38:56.665: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 05:38:56.665: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded) Oct 23 05:38:56.665: INFO: Container collectd ready: true, restart count 0 Oct 23 05:38:56.665: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 05:38:56.665: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 05:38:56.665: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded) Oct 23 05:38:56.665: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 05:38:56.665: INFO: Container node-exporter ready: true, restart count 0 Oct 23 05:38:56.665: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container tas-extender ready: true, restart count 0 Oct 23 05:38:56.665: INFO: rs-e2e-pts-filter-pnssj from sched-pred-3272 started at 2021-10-23 05:38:52 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 05:38:56.665: INFO: rs-e2e-pts-filter-tz6t5 from sched-pred-3272 started at 2021-10-23 05:38:52 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 23 05:38:56.665: INFO: high from sched-preemption-1553 started at 2021-10-23 05:38:14 +0000 UTC (1 container statuses recorded) Oct 23 05:38:56.665: INFO: Container high ready: false, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-89ba6597-18da-42bf-bc7b-1b83222a4db8 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-89ba6597-18da-42bf-bc7b-1b83222a4db8 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-89ba6597-18da-42bf-bc7b-1b83222a4db8 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 05:39:06.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8270" for this suite. 
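The NodeAffinity case above stamps node2 with the random label kubernetes.io/e2e-89ba6597-18da-42bf-bc7b-1b83222a4db8=42 and then relaunches the pod so it can only match that node. A hedged Go sketch of a required node-affinity term using exactly that key and value; the surrounding pod spec is omitted:

// nodeaffinity.go - illustrative required node affinity matching the
// random label applied in the step above (key and value from the log).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	aff := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-89ba6597-18da-42bf-bc7b-1b83222a4db8",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	fmt.Println(len(aff.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms))
}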
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.136 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":11,"skipped":4652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 05:39:06.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 23 05:39:06.780: INFO: Waiting up to 1m0s for all nodes to be ready Oct 23 05:40:06.834: INFO: Waiting for terminating namespaces to be deleted... Oct 23 05:40:06.836: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 23 05:40:06.856: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting Oct 23 05:40:06.856: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting Oct 23 05:40:06.856: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 23 05:40:06.856: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 23 05:40:06.872: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:40:06.872: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 05:40:06.872: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.872: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:40:06.872: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Oct 23 05:40:06.887: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:40:06.887: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 23 05:40:06.887: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-gltgg, Cpu: 100, Mem: 209715200 Oct 23 05:40:06.887: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 23 05:40:06.887: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 23 05:40:06.903: INFO: Waiting for running... Oct 23 05:40:06.904: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 23 05:40:11.979: INFO: ComputeCPUMemFraction for node: node1 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 05:40:11.979: INFO: Node: node1, totalRequestedMemResource: 1340355481600, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Oct 23 05:40:11.979: INFO: ComputeCPUMemFraction for node: node2 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.979: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Pod for on the node: c774c5c5-8839-4ab3-8117-796280571985-0, Cpu: 38400, Mem: 89350041600 Oct 23 05:40:11.980: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 23 05:40:11.980: INFO: Node: node2, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884628480, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
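The jump from Cpu: 100 to Cpu: 38400 per entry in the "after create balanced pods" sections reflects the filler pods the suite creates to even out node utilization before scoring: 38400m is exactly what lifts node1 from 100m requested to half of its 77000m allocatable, which suggests a roughly 0.5 target fraction here (an inference from the logged numbers, not a statement about the framework source). A small stand-alone Go sketch of that sizing arithmetic:

// balanced.go - sketch of sizing a filler ("balanced") pod so a node
// reaches a target utilization fraction; the 0.5 target is inferred
// from the logged numbers above.
package main

import "fmt"

// paddingRequest returns how much more of a resource must be requested
// on a node to lift it from its current requests to target*allocatable.
func paddingRequest(target float64, allocatable, requested int64) int64 {
	return int64(target*float64(allocatable)) - requested
}

func main() {
	// node1 CPU from the log: allocatable 77000m, currently requested 100m.
	fmt.Println(paddingRequest(0.5, 77000, 100)) // 38400, matching "Cpu: 38400" above
}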
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6748f6db-7f24-45e4-9b2c=testing-taint-value-a6edbf5b-8e9b-415b-a221-79700d460c3d:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3634321a-6922-46a5-a900=testing-taint-value-2a32e56b-9a32-4591-bfe3-a05ea9de2792:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4a135a15-cfd1-45db-9525=testing-taint-value-1d8eb432-07dd-4d81-b254-2d9861949e32:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5a16e78b-8f41-417c-a30b=testing-taint-value-49dcf830-6edc-4e7c-91bc-69e0a68ae993:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b966f7da-8a80-47bf-b456=testing-taint-value-ecf9daf5-b614-4f41-be9e-9a02db0441e8:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6cb88b05-9b66-4cd4-9972=testing-taint-value-b47dc222-cdea-49f4-b9d6-9df8e2c5a19e:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f2535672-4b3e-461f-91dc=testing-taint-value-b42b06d8-d9c3-4e34-b956-a6a47b69bacf:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b5238fb3-f850-4acb-bcac=testing-taint-value-bb1da10c-3990-451a-ab5f-23048b060644:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-09c601a0-54bc-4315-97a4=testing-taint-value-dad72e0e-abf6-40f6-897d-dddb18c23c0a:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-64bf0491-e779-4bc7-81bf=testing-taint-value-fe5f8656-4ca6-41da-94a2-914ed7c6e312:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bce468e5-2d55-4923-b4d5=testing-taint-value-d066a75d-400d-4d19-9f38-bdf92749e8bd:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cf495404-c236-4cad-a188=testing-taint-value-31dded05-424c-47f9-ba36-f6743f11c2ea:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-725986f9-85e2-4949-aa14=testing-taint-value-b2f2691e-9582-4e5e-9f97-c16d35189f7a:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8aa8fac4-43f4-4a6d-90eb=testing-taint-value-8a0e90ca-3e60-43c7-9532-0a83fe8f30b8:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c4ed18a4-5b77-4949-9aaf=testing-taint-value-f3d3a8a7-a031-427f-8df9-35b52d62e221:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-abfd9b57-bcf2-47a3-bff4=testing-taint-value-93c564d0-be36-4af7-8968-cd48e019f312:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-594b0eba-c9e4-4d32-9768=testing-taint-value-7aa9e599-3889-419c-be78-7ed8416c3807:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-258a5d78-a034-45c0-94cf=testing-taint-value-9eb562b1-a368-4332-a9c5-955cf52be015:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-06c8cc23-ae99-4fce-8920=testing-taint-value-e0396430-6187-4d44-8af3-fcb1ae1c3fb7:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d0d1ef2f-1150-4497-8698=testing-taint-value-89931f77-7364-4a38-bba3-e2b388e82c91:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bce468e5-2d55-4923-b4d5=testing-taint-value-d066a75d-400d-4d19-9f38-bdf92749e8bd:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cf495404-c236-4cad-a188=testing-taint-value-31dded05-424c-47f9-ba36-f6743f11c2ea:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-725986f9-85e2-4949-aa14=testing-taint-value-b2f2691e-9582-4e5e-9f97-c16d35189f7a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8aa8fac4-43f4-4a6d-90eb=testing-taint-value-8a0e90ca-3e60-43c7-9532-0a83fe8f30b8:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c4ed18a4-5b77-4949-9aaf=testing-taint-value-f3d3a8a7-a031-427f-8df9-35b52d62e221:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-abfd9b57-bcf2-47a3-bff4=testing-taint-value-93c564d0-be36-4af7-8968-cd48e019f312:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-594b0eba-c9e4-4d32-9768=testing-taint-value-7aa9e599-3889-419c-be78-7ed8416c3807:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-258a5d78-a034-45c0-94cf=testing-taint-value-9eb562b1-a368-4332-a9c5-955cf52be015:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-06c8cc23-ae99-4fce-8920=testing-taint-value-e0396430-6187-4d44-8af3-fcb1ae1c3fb7:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d0d1ef2f-1150-4497-8698=testing-taint-value-89931f77-7364-4a38-bba3-e2b388e82c91:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6748f6db-7f24-45e4-9b2c=testing-taint-value-a6edbf5b-8e9b-415b-a221-79700d460c3d:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3634321a-6922-46a5-a900=testing-taint-value-2a32e56b-9a32-4591-bfe3-a05ea9de2792:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4a135a15-cfd1-45db-9525=testing-taint-value-1d8eb432-07dd-4d81-b254-2d9861949e32:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5a16e78b-8f41-417c-a30b=testing-taint-value-49dcf830-6edc-4e7c-91bc-69e0a68ae993:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b966f7da-8a80-47bf-b456=testing-taint-value-ecf9daf5-b614-4f41-be9e-9a02db0441e8:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6cb88b05-9b66-4cd4-9972=testing-taint-value-b47dc222-cdea-49f4-b9d6-9df8e2c5a19e:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f2535672-4b3e-461f-91dc=testing-taint-value-b42b06d8-d9c3-4e34-b956-a6a47b69bacf:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b5238fb3-f850-4acb-bcac=testing-taint-value-bb1da10c-3990-451a-ab5f-23048b060644:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-09c601a0-54bc-4315-97a4=testing-taint-value-dad72e0e-abf6-40f6-897d-dddb18c23c0a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-64bf0491-e779-4bc7-81bf=testing-taint-value-fe5f8656-4ca6-41da-94a2-914ed7c6e312:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 05:40:25.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-7475" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:78.576 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":12,"skipped":5010,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
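For context, the PreferNoSchedule taints and matching tolerations exercised by the SchedulerPriorities spec above have the following shape. This is a minimal sketch using the k8s.io/api core/v1 types only; the key and value are placeholders rather than the generated kubernetes.io/e2e-scheduling-priorities-* pairs from the log, and the snippet is not part of the e2e suite itself.

// Minimal sketch: a soft (PreferNoSchedule) taint and a toleration that matches it.
// Placeholder key/value; the real test generates random kubernetes.io/e2e-scheduling-priorities-* pairs.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// A soft taint: the scheduler tries to avoid the node but may still use it.
	taint := v1.Taint{
		Key:    "example.com/e2e-scheduling-priorities",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// A toleration that matches the taint exactly. The test pod carries one such
	// toleration for every taint on the first node, so only that node scores
	// without a PreferNoSchedule penalty and the pod is preferred onto it.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("%s=%s:%s tolerated: %v\n",
		taint.Key, taint.Value, taint.Effect, toleration.ToleratesTaint(&taint))
}

The "intolerable" taints added to the other nodes have the same shape but no matching toleration on the pod, which is why the pod lands on the first node in the spec that follows.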
[sig-scheduling] SchedulerPredicates [Serial]
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 05:40:25.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 23 05:40:25.366: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 23 05:40:25.374: INFO: Waiting for terminating namespaces to be deleted...
Oct 23 05:40:25.376: INFO: Logging pods the apiserver thinks is on node node1 before test
Oct 23 05:40:25.388: INFO: cmk-init-discover-node1-c599w from kube-system started at 2021-10-22 21:17:43 +0000 UTC (3 container statuses recorded)
Oct 23 05:40:25.388: INFO: Container discover ready: false, restart count 0
Oct 23 05:40:25.388: INFO: Container init ready: false, restart count 0
Oct 23 05:40:25.388: INFO: Container install ready: false, restart count 0
Oct 23 05:40:25.388: INFO: cmk-t9r2t from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 05:40:25.388: INFO: Container nodereport ready: true, restart count 0
Oct 23 05:40:25.388: INFO: Container reconcile ready: true, restart count 0
Oct 23 05:40:25.388: INFO: kube-flannel-2cdvd from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.388: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 05:40:25.388: INFO: kube-multus-ds-amd64-l97s4 from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.388: INFO: Container kube-multus ready: true, restart count 1
Oct 23 05:40:25.388: INFO: kube-proxy-m9z8s from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.388: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 05:40:25.388: INFO: kubernetes-dashboard-785dcbb76d-kc4kh from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.388: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 05:40:25.388: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n from kube-system started at 2021-10-22 21:07:01 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 05:40:25.389: INFO: nginx-proxy-node1 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 05:40:25.389: INFO: node-feature-discovery-worker-2pvq5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 05:40:25.389: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 05:40:25.389: INFO: collectd-n9sbv from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container collectd ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 05:40:25.389: INFO: node-exporter-v656r from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container node-exporter ready: true, restart count 0
Oct 23 05:40:25.389: INFO: prometheus-k8s-0 from monitoring started at 2021-10-22 21:19:48 +0000 UTC (4 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container config-reloader ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container grafana ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container prometheus ready: true, restart count 1
Oct 23 05:40:25.389: INFO: prometheus-operator-585ccfb458-hwjk2 from monitoring started at 2021-10-22 21:19:21 +0000 UTC (2 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 05:40:25.389: INFO: with-tolerations from sched-priority-7475 started at 2021-10-23 05:40:12 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.389: INFO: Container with-tolerations ready: true, restart count 0
Oct 23 05:40:25.389: INFO: Logging pods the apiserver thinks is on node node2 before test
Oct 23 05:40:25.403: INFO: cmk-init-discover-node2-2btnq from kube-system started at 2021-10-22 21:18:03 +0000 UTC (3 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container discover ready: false, restart count 0
Oct 23 05:40:25.403: INFO: Container init ready: false, restart count 0
Oct 23 05:40:25.403: INFO: Container install ready: false, restart count 0
Oct 23 05:40:25.403: INFO: cmk-kn29k from kube-system started at 2021-10-22 21:18:25 +0000 UTC (2 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container nodereport ready: true, restart count 1
Oct 23 05:40:25.403: INFO: Container reconcile ready: true, restart count 0
Oct 23 05:40:25.403: INFO: cmk-webhook-6c9d5f8578-pkwhc from kube-system started at 2021-10-22 21:18:26 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container cmk-webhook ready: true, restart count 0
Oct 23 05:40:25.403: INFO: kube-flannel-xx6ls from kube-system started at 2021-10-22 21:06:21 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container kube-flannel ready: true, restart count 2
Oct 23 05:40:25.403: INFO: kube-multus-ds-amd64-fww5b from kube-system started at 2021-10-22 21:06:30 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container kube-multus ready: true, restart count 1
Oct 23 05:40:25.403: INFO: kube-proxy-5h2bl from kube-system started at 2021-10-22 21:05:27 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 05:40:25.403: INFO: nginx-proxy-node2 from kube-system started at 2021-10-22 21:05:23 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 05:40:25.403: INFO: node-feature-discovery-worker-8k8m5 from kube-system started at 2021-10-22 21:14:11 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container nfd-worker ready: true, restart count 0
Oct 23 05:40:25.403: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq from kube-system started at 2021-10-22 21:15:26 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 05:40:25.403: INFO: collectd-xhdgw from monitoring started at 2021-10-22 21:23:20 +0000 UTC (3 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container collectd ready: true, restart count 0
Oct 23 05:40:25.403: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 05:40:25.403: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 05:40:25.403: INFO: node-exporter-fjc79 from monitoring started at 2021-10-22 21:19:28 +0000 UTC (2 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 05:40:25.403: INFO: Container node-exporter ready: true, restart count 0
Oct 23 05:40:25.403: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg from monitoring started at 2021-10-22 21:22:32 +0000 UTC (1 container statuses recorded)
Oct 23 05:40:25.403: INFO: Container tas-extender ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-dce66cec-0191-4dcf-b88e-385389ae8faf=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-4df6c1e6-f140-4836-bded-763a0877daef testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b0926494555de4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b09264ef1a03e5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b09264ffc97b6f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 279.928162ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b0926506add5d9], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b092650e4bb341], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b0926583c9369f], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b0926586164159], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-dce66cec-0191-4dcf-b88e-385389ae8faf: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b0926586164159], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-dce66cec-0191-4dcf-b88e-385389ae8faf: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b0926494555de4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b09264ef1a03e5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b09264ffc97b6f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 279.928162ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b0926506add5d9], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b092650e4bb341], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b0926583c9369f], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-dce66cec-0191-4dcf-b88e-385389ae8faf=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b09265c73709c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9679/still-no-tolerations to node2]
STEP: removing the label kubernetes.io/e2e-label-key-4df6c1e6-f140-4836-bded-763a0877daef off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-4df6c1e6-f140-4836-bded-763a0877daef
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-dce66cec-0191-4dcf-b88e-385389ae8faf=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 05:40:31.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9679" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:6.192 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":13,"skipped":5386,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
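The FailedScheduling events above are the hard-taint counterpart of the previous spec: the randomly tainted node carries a NoSchedule taint that the relaunched "still-no-tolerations" pod does not tolerate, so it only schedules once the taint is removed. A minimal sketch of that check follows, again with a placeholder key/value instead of the generated kubernetes.io/e2e-taint-key-* pair; the helper function is illustrative, not part of the test code.

// Minimal sketch: a pod with no tolerations cannot tolerate a NoSchedule taint,
// which is what produces the FailedScheduling event until the taint is removed.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// tolerates reports whether any toleration in the list tolerates the taint.
func tolerates(tols []v1.Toleration, taint *v1.Taint) bool {
	for i := range tols {
		if tols[i].ToleratesTaint(taint) {
			return true
		}
	}
	return false
}

func main() {
	// A hard taint: pods that do not tolerate it are filtered out of the node entirely.
	taint := v1.Taint{
		Key:    "example.com/e2e-taint-key",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	var none []v1.Toleration                // the relaunched pod carries no tolerations
	fmt.Println(tolerates(none, &taint))    // false -> FailedScheduling on the tainted node
}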
Oct 23 05:40:31.533: INFO: Running AfterSuite actions on all nodes
Oct 23 05:40:31.533: INFO: Running AfterSuite actions on node 1
Oct 23 05:40:31.533: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0}

Ran 13 of 5770 Specs in 524.780 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped
PASS

Ginkgo ran 1 suite in 8m46.125462521s
Test Suite Passed