I0520 23:55:06.797771 23 e2e.go:129] Starting e2e run "363d4d21-5d8e-439d-b347-44755632b216" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1653090905 - Will randomize all specs
Will run 13 of 5773 specs

May 20 23:55:06.813: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:55:06.818: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 23:55:06.847: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 23:55:06.911: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 20 23:55:06.911: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 20 23:55:06.911: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 23:55:06.911: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 20 23:55:06.911: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 23:55:06.929: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 20 23:55:06.929: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 20 23:55:06.929: INFO: e2e test version: v1.21.9
May 20 23:55:06.930: INFO: kube-apiserver version: v1.21.1
May 20 23:55:06.930: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:55:06.936: INFO: Cluster IP family: ipv4
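Before the first spec runs, the framework gates on cluster health: every node must be schedulable, and the kube-system pods, daemonsets, and replica counts must be ready (the 30m0s, 10m0s, and 5m0s waits above). A minimal client-go sketch of the node half of that gate, assuming only the kubeconfig path from the log; the real framework additionally filters out nodes with NoSchedule taints:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// A node counts as schedulable when it is Ready and not cordoned.
		fmt.Printf("%s ready=%v unschedulable=%v\n", n.Name, ready, n.Spec.Unschedulable)
	}
}
```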
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:55:06.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W0520 23:55:06.997688 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:55:06.997: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:55:07.000: INFO: Error creating dryrun pod; assuming PodSecurityPolicy
is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 23:55:07.002: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 23:55:07.011: INFO: Waiting for terminating namespaces to be deleted... May 20 23:55:07.013: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 23:55:07.022: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 23:55:07.022: INFO: Container nodereport ready: true, restart count 0 May 20 23:55:07.022: INFO: Container reconcile ready: true, restart count 0 May 20 23:55:07.022: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 23:55:07.022: INFO: Container discover ready: false, restart count 0 May 20 23:55:07.022: INFO: Container init ready: false, restart count 0 May 20 23:55:07.022: INFO: Container install ready: false, restart count 0 May 20 23:55:07.022: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container kube-flannel ready: true, restart count 3 May 20 23:55:07.022: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container kube-multus ready: true, restart count 1 May 20 23:55:07.022: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:55:07.022: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 23:55:07.022: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:55:07.022: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:55:07.022: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:55:07.022: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:55:07.022: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:55:07.022: INFO: Container collectd ready: true, restart count 0 May 20 23:55:07.022: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:55:07.022: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:55:07.022: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:55:07.022: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:55:07.022: INFO: Container node-exporter ready: true, restart count 0 May 20 23:55:07.022: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 
+0000 UTC (4 container statuses recorded) May 20 23:55:07.022: INFO: Container config-reloader ready: true, restart count 0 May 20 23:55:07.022: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 23:55:07.022: INFO: Container grafana ready: true, restart count 0 May 20 23:55:07.022: INFO: Container prometheus ready: true, restart count 1 May 20 23:55:07.022: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 23:55:07.033: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 23:55:07.033: INFO: Container nodereport ready: true, restart count 0 May 20 23:55:07.033: INFO: Container reconcile ready: true, restart count 0 May 20 23:55:07.033: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 23:55:07.033: INFO: Container discover ready: false, restart count 0 May 20 23:55:07.033: INFO: Container init ready: false, restart count 0 May 20 23:55:07.034: INFO: Container install ready: false, restart count 0 May 20 23:55:07.034: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container cmk-webhook ready: true, restart count 0 May 20 23:55:07.034: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container kube-flannel ready: true, restart count 2 May 20 23:55:07.034: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container kube-multus ready: true, restart count 1 May 20 23:55:07.034: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:55:07.034: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 23:55:07.034: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:55:07.034: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:55:07.034: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:55:07.034: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:55:07.034: INFO: Container collectd ready: true, restart count 0 May 20 23:55:07.034: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:55:07.034: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:55:07.034: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:55:07.034: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:55:07.034: INFO: Container 
node-exporter ready: true, restart count 0 May 20 23:55:07.034: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 23:55:07.034: INFO: Container tas-extender ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 20 23:55:07.069: INFO: Pod cmk-9hxtl requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod cmk-c5x47 requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod cmk-webhook-6c9d5f8578-5kbbc requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod kube-flannel-2blt7 requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod kube-flannel-jpmpd requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod kube-multus-ds-amd64-krd6m requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod kube-multus-ds-amd64-p22zp requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod kube-proxy-rg2fp requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod kube-proxy-v8kzq requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod kubernetes-dashboard-785dcbb76d-6c2f8 requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod kubernetes-metrics-scraper-5558854cb-66r9g requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod node-feature-discovery-worker-nphk9 requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod node-feature-discovery-worker-rh55h requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod collectd-875j8 requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod collectd-h4pzk requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod node-exporter-czwvh requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod node-exporter-vm24n requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 20 23:55:07.069: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-ddzzd requesting local ephemeral resource =0 on Node node2 May 20 23:55:07.069: INFO: Using pod capacity: 40608090249 May 20 23:55:07.069: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 May 20 23:55:07.069: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 20 23:55:07.263: INFO: Waiting for running... 
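The saturation arithmetic is visible in the last few lines above: each node reports 406080902496 bytes of allocatable ephemeral storage, and the logged pod capacity of 40608090249 is that value divided by ten with integer division (the divisor is inferred from the logged numbers, not quoted from the test source), so ten pods fit per node and the test starts 20 before trying one more:

```go
package main

import "fmt"

func main() {
	// Figures copied from the log lines above.
	allocatablePerNode := int64(406080902496) // ephemeral-storage allocatable on node1 and node2
	podCapacity := allocatablePerNode / 10    // 40608090249, the logged "pod capacity"
	podsPerNode := allocatablePerNode / podCapacity

	fmt.Println(podCapacity)     // 40608090249
	fmt.Println(2 * podsPerNode) // 20 pods saturate the two schedulable nodes
}
```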
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f0f56bea14b23b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f0f56ca830c82f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f0f56ced276732], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.157003061s] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f0f56d32aef288], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f0f56d5bc096e7], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f0f56bea94f8cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f0f56c9b7fa558], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f0f56cb21b283f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 379.284496ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f0f56cd62285bf], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f0f56d30e1530a], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f0f56befb08a37], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f0f56e02e86e4f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f0f56e184a6389], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 358.734924ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f0f56e1f114558], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f0f56e26121959], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f0f56bf03a031d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f0f56d4f29b85c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f0f56d716b972f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 574.735126ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f0f56d817a8e17], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f0f56dcc8d7d61], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f0f56bf0c4d32e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-12 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f0f56e06d0d157], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: 
Considering event: Type = [Normal], Name = [overcommit-12.16f0f56e5b9a85b7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.422497798s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f0f56e618262fc], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f0f56e6888e50d], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f0f56bf170d3e5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f0f56d75d3a215], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f0f56d8d409de0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 393.010086ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f0f56db80ed172], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f0f56df8dec7ba], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f0f56bf1f9814f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f0f56dc61babeb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f0f56de10dd820], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 452.070915ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f0f56e06a87ad3], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f0f56e0e8c58f0], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f0f56bf28ac3ea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f0f56daf65437e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f0f56e2653534d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.995307376s] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f0f56e2de72434], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f0f56e34747d1b], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f0f56bf315967a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f0f56dcdf2d4c5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f0f56e3b414438], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.833851303s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f0f56e423efbc3], Reason = [Created], Message = [Created container overcommit-16] 
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f0f56e489acb69], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f0f56bf3aea7a8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f0f56daab19fcc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f0f56dbf068e25], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 341.101825ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f0f56dd501f31f], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f0f56e11e378f9], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f0f56bf441971b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f0f56dd0d621db], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f0f56e50af1da1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 2.144915762s] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f0f56e5812d255], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f0f56e5f480377], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f0f56bf4e5389d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-19 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f0f56e068c1c72], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f0f56e45bbf50e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.060092595s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f0f56e4bf78f30], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f0f56e52ba26c6], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f0f56beb12aba0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f0f56d36912f09], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f0f56d4b2891d1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 345.45693ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f0f56d609dfed0], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f0f56dac9a36e4], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f0f56beb9845e3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-3 to node1] STEP: 
Considering event: Type = [Normal], Name = [overcommit-3.16f0f56cacdf3900], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f0f56cdb674005], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 780.657677ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f0f56d1675ce21], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f0f56d3f5437c3], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f0f56bec2f7819], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f0f56daad0ff84], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f0f56e11584b56], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.7201353s] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f0f56e18c4bd48], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f0f56e2044dfb0], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f0f56becb1f018], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f0f56c9e940344], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f0f56cbd441769], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 514.846508ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f0f56cefc1470d], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f0f56d3b76db00], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f0f56bed4c0da2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-6 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f0f56d53138f96], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f0f56d813a768d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 774.295301ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f0f56d987a0d40], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f0f56dd26e3dda], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f0f56bede8dcb8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-7 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f0f56d519397eb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f0f56d6712cbaf], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 360.65057ms] STEP: Considering event: Type = 
[Normal], Name = [overcommit-7.16f0f56d8a555af1], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f0f56dd0057aff], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f0f56bee7433fa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f0f56d35592dea], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f0f56d54751b79], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 521.908619ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f0f56d74ee8f16], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f0f56db9e35231], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f0f56bef11d3e4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2552/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f0f56e035adfa1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f0f56e2ec2dc4c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 728.228976ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f0f56e36fd9e74], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f0f56e410b2c7a], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16f0f56f772b9330], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:55:23.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2552" for this suite. 
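The Warning event above is what the spec waits for: the 21st pod must fail scheduling with "Insufficient ephemeral-storage" on both worker nodes. A hedged client-go sketch of such a check, reusing the namespace and pod name from the log; the real framework observes events through a watch rather than a one-shot list:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasFailedScheduling reports whether the scheduler emitted a
// FailedScheduling event for the named pod and prints its message.
func hasFailedScheduling(cs kubernetes.Interface, ns, pod string) (bool, error) {
	events, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod + ",reason=FailedScheduling",
	})
	if err != nil {
		return false, err
	}
	for _, e := range events.Items {
		// e.g. "0/5 nodes are available: 2 Insufficient ephemeral-storage, ..."
		fmt.Println(e.Message)
	}
	return len(events.Items) > 0, nil
}
```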
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.391 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":1,"skipped":2594,"failed":0}
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:55:23.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
May 20 23:55:23.388: INFO: Waiting up to 1m0s for all nodes to be ready
May 20 23:56:23.448: INFO: Waiting for terminating namespaces to be deleted...
May 20 23:56:23.450: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 23:56:23.468: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 20 23:56:23.468: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 20 23:56:23.468: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 23:56:23.468: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 20 23:56:23.483: INFO: ComputeCPUMemFraction for node: node1 May 20 23:56:23.483: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400 May 20 23:56:23.483: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600 May 20 23:56:23.483: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000 May 20 23:56:23.483: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840 May 20 23:56:23.483: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200 May 20 23:56:23.483: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000 May 20 23:56:23.483: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 20 23:56:23.483: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200 May 20 23:56:23.483: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200 May 20 23:56:23.483: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600 May 20 23:56:23.483: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200 May 20 23:56:23.483: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 20 23:56:23.483: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 20 23:56:23.483: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 20 23:56:23.483: INFO: ComputeCPUMemFraction for node: node2 May 20 23:56:23.483: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400 May 20 23:56:23.483: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600 May 20 23:56:23.483: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200 May 20 23:56:23.483: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000 May 20 23:56:23.484: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840 May 20 23:56:23.484: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200 May 20 23:56:23.484: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200 May 20 23:56:23.484: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 20 23:56:23.484: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200 May 20 23:56:23.484: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200 May 20 23:56:23.484: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600 May 20 23:56:23.484: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200 May 20 23:56:23.484: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200 May 20 23:56:23.484: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 20 23:56:23.484: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. 
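ComputeCPUMemFraction, logged above for both nodes, divides each node's total requested CPU (millicores) and memory (bytes) by its allocatable capacity. The fractions in the log follow directly from the logged totals:

```go
package main

import "fmt"

func main() {
	// Totals reported for node1 above (millicores, bytes).
	cpuRequested, cpuAllocatableMil := 937.0, 77000.0
	memRequested, memAllocatableVal := 1774807040.0, 178884608000.0

	fmt.Println(cpuRequested / cpuAllocatableMil) // 0.01216883116883117
	fmt.Println(memRequested / memAllocatableVal) // 0.009921519016325877
}
```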
STEP: Verifying the node has a label kubernetes.io/hostname May 20 23:56:27.529: INFO: ComputeCPUMemFraction for node: node1 May 20 23:56:27.529: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400 May 20 23:56:27.529: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600 May 20 23:56:27.529: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000 May 20 23:56:27.529: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840 May 20 23:56:27.529: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000 May 20 23:56:27.529: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 20 23:56:27.529: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600 May 20 23:56:27.529: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 20 23:56:27.529: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 20 23:56:27.529: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 20 23:56:27.529: INFO: ComputeCPUMemFraction for node: node2 May 20 23:56:27.529: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400 May 20 23:56:27.529: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600 May 20 23:56:27.529: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000 May 20 23:56:27.529: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840 May 20 23:56:27.529: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 20 23:56:27.529: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600 May 20 23:56:27.529: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 23:56:27.529: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 20 23:56:27.529: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 May 20 23:56:27.539: INFO: Waiting for running... May 20 23:56:27.544: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 20 23:56:32.619: INFO: ComputeCPUMemFraction for node: node1 May 20 23:56:32.619: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400 May 20 23:56:32.619: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600 May 20 23:56:32.619: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000 May 20 23:56:32.619: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840 May 20 23:56:32.619: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000 May 20 23:56:32.619: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 20 23:56:32.619: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600 May 20 23:56:32.619: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 20 23:56:32.619: INFO: Pod for on the node: 5a2d46e0-1f68-47d4-995f-9f4d03a275e4-0, Cpu: 45263, Mem: 105568540672 May 20 23:56:32.619: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 20 23:56:32.619: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 20 23:56:32.619: INFO: ComputeCPUMemFraction for node: node2 May 20 23:56:32.619: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400 May 20 23:56:32.619: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600 May 20 23:56:32.619: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000 May 20 23:56:32.619: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840 May 20 23:56:32.619: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 20 23:56:32.619: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600 May 20 23:56:32.619: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Pod for on the node: 70631090-eafc-4763-a892-2f0b9a25b4b2-0, Cpu: 45713, Mem: 106838403072 May 20 23:56:32.619: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 20 23:56:32.619: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 20 23:56:32.619: INFO: Node: node2, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167 
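The two UUID-named pods above are the balancing pods: each one requests whatever brings its node up to the same target share of allocatable, here 60% (the 0.6 target is inferred from the resulting cpuFraction, not quoted from the test source). The CPU numbers reproduce exactly; the memory requests land slightly above the raw formula (memFraction 0.60007 rather than 0.600), presumably from rounding inside the helper, so only the CPU arithmetic is shown:

```go
package main

import "fmt"

func main() {
	// Padding request = target share of allocatable minus what is already
	// requested: 937m on node1, 487m on node2 (values from the log).
	cpuAllocatableMil := int64(77000)
	target := cpuAllocatableMil * 6 / 10 // 46200m

	fmt.Println(target - 937) // 45263 -> CPU request of the node1 balancing pod
	fmt.Println(target - 487) // 45713 -> CPU request of the node2 balancing pod
}
```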
STEP: Trying to launch the pod with podAntiAffinity.
STEP: Wait the pod becomes running
STEP: Verify the pod was scheduled to the expected node.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:56:46.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-7968" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:83.310 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":2,"skipped":2744,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:56:46.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 23:56:46.698: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 23:56:46.706: INFO: Waiting for terminating namespaces to be deleted...
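For reference, the pod-with-pod-antiaffinity pod launched by the SchedulerPriorities spec above carries a required anti-affinity term on the kubernetes.io/hostname topology, which is what pushes it off the node running pod-with-label-security-s1. A sketch of such a spec with the core/v1 Go types; the security=S1 label key and value are inferred from the pod name, and the pause image is taken from the earlier pull events:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					// Hard requirement: never share a hostname with a
					// pod labeled security=S1 (assumed label).
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"S1"},
							}},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Name)
}
```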
May 20 23:56:46.709: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 23:56:46.719: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 23:56:46.719: INFO: Container nodereport ready: true, restart count 0 May 20 23:56:46.719: INFO: Container reconcile ready: true, restart count 0 May 20 23:56:46.719: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 23:56:46.719: INFO: Container discover ready: false, restart count 0 May 20 23:56:46.719: INFO: Container init ready: false, restart count 0 May 20 23:56:46.719: INFO: Container install ready: false, restart count 0 May 20 23:56:46.719: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container kube-flannel ready: true, restart count 3 May 20 23:56:46.719: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container kube-multus ready: true, restart count 1 May 20 23:56:46.719: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:56:46.719: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 23:56:46.719: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:56:46.719: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:56:46.719: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:56:46.719: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:56:46.719: INFO: Container collectd ready: true, restart count 0 May 20 23:56:46.719: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:56:46.719: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:56:46.719: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:56:46.719: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:56:46.719: INFO: Container node-exporter ready: true, restart count 0 May 20 23:56:46.719: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 23:56:46.719: INFO: Container config-reloader ready: true, restart count 0 May 20 23:56:46.719: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 23:56:46.719: INFO: Container grafana ready: true, restart count 0 May 20 23:56:46.719: INFO: Container prometheus ready: true, restart count 1 May 20 23:56:46.719: INFO: pod-with-pod-antiaffinity from sched-priority-7968 started at 2022-05-20 23:56:32 
+0000 UTC (1 container statuses recorded) May 20 23:56:46.719: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 May 20 23:56:46.719: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 23:56:46.728: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 23:56:46.728: INFO: Container nodereport ready: true, restart count 0 May 20 23:56:46.728: INFO: Container reconcile ready: true, restart count 0 May 20 23:56:46.728: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 23:56:46.728: INFO: Container discover ready: false, restart count 0 May 20 23:56:46.728: INFO: Container init ready: false, restart count 0 May 20 23:56:46.728: INFO: Container install ready: false, restart count 0 May 20 23:56:46.728: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container cmk-webhook ready: true, restart count 0 May 20 23:56:46.728: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container kube-flannel ready: true, restart count 2 May 20 23:56:46.728: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container kube-multus ready: true, restart count 1 May 20 23:56:46.728: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:56:46.728: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 23:56:46.728: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:56:46.728: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:56:46.728: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:56:46.728: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:56:46.728: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:56:46.729: INFO: Container collectd ready: true, restart count 0 May 20 23:56:46.729: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:56:46.729: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:56:46.729: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:56:46.729: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:56:46.729: INFO: Container node-exporter ready: true, restart count 0 May 20 23:56:46.729: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 23:56:46.729: INFO: Container 
tas-extender ready: true, restart count 0
May 20 23:56:46.729: INFO: pod-with-label-security-s1 from sched-priority-7968 started at 2022-05-20 23:56:23 +0000 UTC (1 container statuses recorded)
May 20 23:56:46.729: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-569b5e88-d5df-4775-88fc-3e404681e886=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-bf395748-039a-483a-936c-1c395929d050 testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-bf395748-039a-483a-936c-1c395929d050 off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-bf395748-039a-483a-936c-1c395929d050
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-569b5e88-d5df-4775-88fc-3e404681e886=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:56:54.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3102" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.169 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":3,"skipped":2968,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:56:54.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 20 23:56:54.867: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 20 23:56:54.875: INFO: Waiting for terminating namespaces to be deleted...
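The taints-tolerations spec above applies a random NoSchedule taint to the found node, then relaunches the pod with an exactly matching toleration. That match rule can be exercised offline with the core/v1 helper, reusing the taint key and value from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Taint applied to node1 and the matching toleration carried by the
	// relaunched pod; key and value copied from the STEP lines above.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-569b5e88-d5df-4775-88fc-3e404681e886",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}
	tol := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	// true -> the pod may schedule onto the tainted node.
	fmt.Println(tol.ToleratesTaint(&taint))
}
```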
May 20 23:56:54.878: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 23:56:54.889: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 23:56:54.889: INFO: Container nodereport ready: true, restart count 0 May 20 23:56:54.889: INFO: Container reconcile ready: true, restart count 0 May 20 23:56:54.889: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 23:56:54.889: INFO: Container discover ready: false, restart count 0 May 20 23:56:54.889: INFO: Container init ready: false, restart count 0 May 20 23:56:54.889: INFO: Container install ready: false, restart count 0 May 20 23:56:54.889: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container kube-flannel ready: true, restart count 3 May 20 23:56:54.889: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container kube-multus ready: true, restart count 1 May 20 23:56:54.889: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:56:54.889: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 23:56:54.889: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:56:54.889: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:56:54.889: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:56:54.889: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:56:54.889: INFO: Container collectd ready: true, restart count 0 May 20 23:56:54.889: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:56:54.889: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:56:54.889: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:56:54.889: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:56:54.889: INFO: Container node-exporter ready: true, restart count 0 May 20 23:56:54.889: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 23:56:54.889: INFO: Container config-reloader ready: true, restart count 0 May 20 23:56:54.889: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 23:56:54.889: INFO: Container grafana ready: true, restart count 0 May 20 23:56:54.889: INFO: Container prometheus ready: true, restart count 1 May 20 23:56:54.889: INFO: with-tolerations from sched-pred-3102 started at 2022-05-20 23:56:50 +0000 UTC (1 
container statuses recorded) May 20 23:56:54.889: INFO: Container with-tolerations ready: true, restart count 0 May 20 23:56:54.889: INFO: pod-with-pod-antiaffinity from sched-priority-7968 started at 2022-05-20 23:56:32 +0000 UTC (1 container statuses recorded) May 20 23:56:54.889: INFO: Container pod-with-pod-antiaffinity ready: false, restart count 0 May 20 23:56:54.889: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 23:56:54.904: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 23:56:54.904: INFO: Container nodereport ready: true, restart count 0 May 20 23:56:54.904: INFO: Container reconcile ready: true, restart count 0 May 20 23:56:54.904: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 23:56:54.904: INFO: Container discover ready: false, restart count 0 May 20 23:56:54.904: INFO: Container init ready: false, restart count 0 May 20 23:56:54.904: INFO: Container install ready: false, restart count 0 May 20 23:56:54.904: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container cmk-webhook ready: true, restart count 0 May 20 23:56:54.904: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container kube-flannel ready: true, restart count 2 May 20 23:56:54.904: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container kube-multus ready: true, restart count 1 May 20 23:56:54.904: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:56:54.904: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 23:56:54.904: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:56:54.904: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:56:54.904: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:56:54.904: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:56:54.904: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:56:54.905: INFO: Container collectd ready: true, restart count 0 May 20 23:56:54.905: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:56:54.905: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:56:54.905: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:56:54.905: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:56:54.905: INFO: Container node-exporter ready: 
true, restart count 0 May 20 23:56:54.905: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 23:56:54.905: INFO: Container tas-extender ready: true, restart count 0 May 20 23:56:54.905: INFO: pod-with-label-security-s1 from sched-priority-7968 started at 2022-05-20 23:56:23 +0000 UTC (1 container statuses recorded) May 20 23:56:54.905: INFO: Container pod-with-label-security-s1 ready: false, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:57:11.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9752" for this suite. 
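------------------------------
Note: the PodTopologySpread Filtering spec labels exactly two nodes with the dedicated key kubernetes.io/e2e-pts-filter, then schedules 4 pods under a hard (DoNotSchedule) MaxSkew=1 constraint; a 3+1 placement would give skew |3-1| = 2 > 1, so the only admissible outcome is the 2+2 split the spec verifies. A minimal sketch of such a constraint (the label selector is illustrative; the suite matches its own pod labels):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	c := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule, // hard filter: reject rather than misplace
		// Illustrative selector; not the suite's actual pod labels.
		LabelSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "e2e-pts-filter"}},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
------------------------------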
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.191 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":4,"skipped":3015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:57:11.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 20 23:57:11.062: INFO: Waiting up to 1m0s for all nodes to be ready May 20 23:58:11.116: INFO: Waiting for terminating namespaces to be deleted... May 20 23:58:11.118: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 20 23:58:11.137: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting May 20 23:58:11.137: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting May 20 23:58:11.137: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 20 23:58:11.137: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
May 20 23:58:11.154: INFO: ComputeCPUMemFraction for node: node1 May 20 23:58:11.154: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400 May 20 23:58:11.154: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600 May 20 23:58:11.154: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000 May 20 23:58:11.154: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840 May 20 23:58:11.154: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200 May 20 23:58:11.154: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000 May 20 23:58:11.155: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 20 23:58:11.155: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600 May 20 23:58:11.155: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 20 23:58:11.155: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 20 23:58:11.155: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 20 23:58:11.155: INFO: ComputeCPUMemFraction for node: node2 May 20 23:58:11.155: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400 May 20 23:58:11.155: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600 May 20 23:58:11.155: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000 May 20 23:58:11.155: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840 May 20 23:58:11.155: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 20 23:58:11.155: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600 May 20 23:58:11.155: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200 May 20 23:58:11.155: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200 May 20 23:58:11.155: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 20 23:58:11.155: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
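------------------------------
Note: the ComputeCPUMemFraction lines above are plain ratios of total requested resource to node allocatable (millicores for CPU, bytes for memory). A stdlib-only sketch reproducing the logged values:

package main

import "fmt"

func main() {
	// cpuFraction = total requested millicores / allocatable millicores
	// memFraction = total requested bytes / allocatable bytes
	fmt.Println(937.0 / 77000.0)               // 0.01216883116883117  (node1 CPU)
	fmt.Println(1774807040.0 / 178884608000.0) // 0.009921519016325877 (node1 memory)
	fmt.Println(487.0 / 77000.0)               // 0.006324675324675325 (node2 CPU)
}
------------------------------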
STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 May 20 23:58:21.251: INFO: ComputeCPUMemFraction for node: node2 May 20 23:58:21.251: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400 May 20 23:58:21.251: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600 May 20 23:58:21.252: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000 May 20 23:58:21.252: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840 May 20 23:58:21.252: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 20 23:58:21.252: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600 May 20 23:58:21.252: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 20 23:58:21.252: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346 May 20 23:58:21.252: INFO: ComputeCPUMemFraction for node: node1 May 20 23:58:21.252: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400 May 20 23:58:21.252: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600 May 20 23:58:21.252: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000 May 20 23:58:21.252: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840 May 20 23:58:21.252: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000 May 20 23:58:21.252: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 20 23:58:21.252: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600 May 20 23:58:21.252: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200 May 20 23:58:21.252: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 20 23:58:21.252: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 20 23:58:21.252: INFO: Node: node1, 
totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 20 23:58:21.263: INFO: Waiting for running... May 20 23:58:21.266: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 20 23:58:26.338: INFO: ComputeCPUMemFraction for node: node2 May 20 23:58:26.338: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400 May 20 23:58:26.338: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600 May 20 23:58:26.338: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000 May 20 23:58:26.338: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840 May 20 23:58:26.338: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 20 23:58:26.338: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600 May 20 23:58:26.338: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: 6c33b2d1-b8a7-4181-8259-956f91f02e01-0, Cpu: 38013, Mem: 88949942272 May 20 23:58:26.338: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 20 23:58:26.338: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 20 23:58:26.338: INFO: ComputeCPUMemFraction for node: node1 May 20 23:58:26.338: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400 May 20 23:58:26.338: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600 May 20 23:58:26.338: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000 May 20 23:58:26.338: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840 May 20 23:58:26.338: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000 May 20 23:58:26.338: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 20 23:58:26.338: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600 May 20 23:58:26.338: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200 May 20 23:58:26.338: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 20 23:58:26.338: INFO: Pod for on the node: f8e53e15-ec33-4510-ad77-fd08f79f23bd-0, Cpu: 37563, Mem: 87680079872 May 20 23:58:26.338: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 20 23:58:26.338: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:58:48.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1623" for this suite. 
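------------------------------
Note: "create balanced pods" above pads each node with one filler pod sized so that total requests on both nodes reach the same fraction (0.5 of allocatable), which is why both nodes log cpuFraction 0.5 afterwards; with utilization equalized, the test-pod's placement on "node1" can only be attributed to the topology-spread score. A sketch of the sizing arithmetic, using only numbers from the log:

package main

import "fmt"

func main() {
	// From the log: allocatable CPU is 77000m on both nodes; pre-test
	// requests are 937m (node1) and 487m (node2).
	const cpuAllocatable = 77000
	requested := map[string]int64{"node1": 937, "node2": 487}
	target := int64(cpuAllocatable) / 2 // 38500m, i.e. the 0.5 fraction
	for node, req := range requested {
		// node1 -> 37563m and node2 -> 38013m, matching the balancer pods
		// f8e53e15-...-0 and 6c33b2d1-...-0 logged above.
		fmt.Printf("%s balancer CPU request: %dm\n", node, target-req)
	}
}
------------------------------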
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:97.390 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":5,"skipped":3065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:58:48.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 23:58:48.461: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 23:58:48.471: INFO: Waiting for terminating namespaces to be deleted... 
May 20 23:58:48.474: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 23:58:48.484: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 23:58:48.484: INFO: Container nodereport ready: true, restart count 0 May 20 23:58:48.484: INFO: Container reconcile ready: true, restart count 0 May 20 23:58:48.484: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 23:58:48.484: INFO: Container discover ready: false, restart count 0 May 20 23:58:48.484: INFO: Container init ready: false, restart count 0 May 20 23:58:48.484: INFO: Container install ready: false, restart count 0 May 20 23:58:48.484: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container kube-flannel ready: true, restart count 3 May 20 23:58:48.484: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container kube-multus ready: true, restart count 1 May 20 23:58:48.484: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:58:48.484: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 23:58:48.484: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:58:48.484: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:58:48.484: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:58:48.484: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:58:48.484: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:58:48.484: INFO: Container collectd ready: true, restart count 0 May 20 23:58:48.484: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:58:48.484: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:58:48.485: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:58:48.485: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:58:48.485: INFO: Container node-exporter ready: true, restart count 0 May 20 23:58:48.485: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 23:58:48.485: INFO: Container config-reloader ready: true, restart count 0 May 20 23:58:48.485: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 23:58:48.485: INFO: Container grafana ready: true, restart count 0 May 20 23:58:48.485: INFO: Container prometheus ready: true, restart count 1 May 20 23:58:48.485: INFO: test-pod from sched-priority-1623 started at 2022-05-20 23:58:32 +0000 UTC (1 
container statuses recorded) May 20 23:58:48.485: INFO: Container test-pod ready: true, restart count 0 May 20 23:58:48.485: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 23:58:48.493: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 23:58:48.493: INFO: Container nodereport ready: true, restart count 0 May 20 23:58:48.493: INFO: Container reconcile ready: true, restart count 0 May 20 23:58:48.493: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 23:58:48.493: INFO: Container discover ready: false, restart count 0 May 20 23:58:48.493: INFO: Container init ready: false, restart count 0 May 20 23:58:48.493: INFO: Container install ready: false, restart count 0 May 20 23:58:48.493: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container cmk-webhook ready: true, restart count 0 May 20 23:58:48.493: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container kube-flannel ready: true, restart count 2 May 20 23:58:48.493: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container kube-multus ready: true, restart count 1 May 20 23:58:48.493: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:58:48.493: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 23:58:48.493: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:58:48.493: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:58:48.493: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:58:48.493: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:58:48.493: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:58:48.493: INFO: Container collectd ready: true, restart count 0 May 20 23:58:48.493: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:58:48.493: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:58:48.494: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:58:48.494: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:58:48.494: INFO: Container node-exporter ready: true, restart count 0 May 20 23:58:48.494: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 23:58:48.494: INFO: Container tas-extender ready: true, 
restart count 0 May 20 23:58:48.494: INFO: rs-e2e-pts-score-2c7b2 from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:48.494: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 23:58:48.494: INFO: rs-e2e-pts-score-lw6sl from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:48.494: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 23:58:48.494: INFO: rs-e2e-pts-score-v7lxb from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:48.494: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 23:58:48.494: INFO: rs-e2e-pts-score-wbw9m from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:48.494: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d82edf9a-0415-434f-9b50-6cebf8b14bfe=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-2488c5cd-1af4-47c3-8b0a-ea313a2928b4 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59f77fa1c1e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4995/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59fd18bf467], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59fe5c2aa5e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 339.121463ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59febf06b92], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59ff3b75942], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f5a067f7b740], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f0f5a069b4ba99], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d82edf9a-0415-434f-9b50-6cebf8b14bfe: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f0f5a069b4ba99], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-d82edf9a-0415-434f-9b50-6cebf8b14bfe: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59f77fa1c1e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4995/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59fd18bf467], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59fe5c2aa5e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 339.121463ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59febf06b92], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f59ff3b75942], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f0f5a067f7b740], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d82edf9a-0415-434f-9b50-6cebf8b14bfe=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16f0f5a0da76305b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4995/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-2488c5cd-1af4-47c3-8b0a-ea313a2928b4 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-2488c5cd-1af4-47c3-8b0a-ea313a2928b4 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d82edf9a-0415-434f-9b50-6cebf8b14bfe=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:58:54.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4995" for this suite. 
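------------------------------
Note: the "not matching" spec is the mirror image of the earlier one: the relaunched pod carries no toleration, so it stays Pending with the FailedScheduling event quoted above (3 masters tainted, 1 node fails the node selector, and the candidate node carries the test taint) until the taint is removed. A sketch of the taint object itself, with key, value, and effect taken from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The NoSchedule taint applied to node2; "still-no-tolerations" cannot
	// tolerate it, hence the FailedScheduling event until removal.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-d82edf9a-0415-434f-9b50-6cebf8b14bfe",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}
	b, _ := json.Marshal(taint)
	fmt.Println(string(b))
}
------------------------------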
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.184 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":6,"skipped":3315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:58:54.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 23:58:54.642: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 23:58:54.651: INFO: Waiting for terminating namespaces to be deleted... 
May 20 23:58:54.653: INFO: Logging pods the apiserver thinks is on node node1 before test May 20 23:58:54.663: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 20 23:58:54.663: INFO: Container nodereport ready: true, restart count 0 May 20 23:58:54.663: INFO: Container reconcile ready: true, restart count 0 May 20 23:58:54.663: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 20 23:58:54.663: INFO: Container discover ready: false, restart count 0 May 20 23:58:54.663: INFO: Container init ready: false, restart count 0 May 20 23:58:54.663: INFO: Container install ready: false, restart count 0 May 20 23:58:54.663: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container kube-flannel ready: true, restart count 3 May 20 23:58:54.663: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container kube-multus ready: true, restart count 1 May 20 23:58:54.663: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:58:54.663: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 20 23:58:54.663: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:58:54.663: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:58:54.663: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:58:54.663: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:58:54.663: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:58:54.663: INFO: Container collectd ready: true, restart count 0 May 20 23:58:54.663: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:58:54.663: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:58:54.663: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:58:54.663: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:58:54.663: INFO: Container node-exporter ready: true, restart count 0 May 20 23:58:54.663: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 20 23:58:54.663: INFO: Container config-reloader ready: true, restart count 0 May 20 23:58:54.663: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 20 23:58:54.663: INFO: Container grafana ready: true, restart count 0 May 20 23:58:54.663: INFO: Container prometheus ready: true, restart count 1 May 20 23:58:54.663: INFO: test-pod from sched-priority-1623 started at 2022-05-20 23:58:32 +0000 UTC (1 
container statuses recorded) May 20 23:58:54.663: INFO: Container test-pod ready: true, restart count 0 May 20 23:58:54.663: INFO: Logging pods the apiserver thinks is on node node2 before test May 20 23:58:54.671: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 20 23:58:54.671: INFO: Container nodereport ready: true, restart count 0 May 20 23:58:54.671: INFO: Container reconcile ready: true, restart count 0 May 20 23:58:54.671: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 20 23:58:54.671: INFO: Container discover ready: false, restart count 0 May 20 23:58:54.671: INFO: Container init ready: false, restart count 0 May 20 23:58:54.671: INFO: Container install ready: false, restart count 0 May 20 23:58:54.671: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container cmk-webhook ready: true, restart count 0 May 20 23:58:54.671: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container kube-flannel ready: true, restart count 2 May 20 23:58:54.671: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container kube-multus ready: true, restart count 1 May 20 23:58:54.671: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container kube-proxy ready: true, restart count 2 May 20 23:58:54.671: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 20 23:58:54.671: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container nginx-proxy ready: true, restart count 2 May 20 23:58:54.671: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container nfd-worker ready: true, restart count 0 May 20 23:58:54.671: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container kube-sriovdp ready: true, restart count 0 May 20 23:58:54.671: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 20 23:58:54.671: INFO: Container collectd ready: true, restart count 0 May 20 23:58:54.671: INFO: Container collectd-exporter ready: true, restart count 0 May 20 23:58:54.671: INFO: Container rbac-proxy ready: true, restart count 0 May 20 23:58:54.671: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 20 23:58:54.671: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 20 23:58:54.671: INFO: Container node-exporter ready: true, restart count 0 May 20 23:58:54.671: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container tas-extender ready: true, 
restart count 0 May 20 23:58:54.671: INFO: still-no-tolerations from sched-pred-4995 started at 2022-05-20 23:58:54 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container still-no-tolerations ready: false, restart count 0 May 20 23:58:54.671: INFO: rs-e2e-pts-score-2c7b2 from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 23:58:54.671: INFO: rs-e2e-pts-score-lw6sl from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 23:58:54.671: INFO: rs-e2e-pts-score-v7lxb from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container e2e-pts-score ready: true, restart count 0 May 20 23:58:54.671: INFO: rs-e2e-pts-score-wbw9m from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded) May 20 23:58:54.671: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-565b8722-356f-4b04-bffe-4f5f115a465a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-565b8722-356f-4b04-bffe-4f5f115a465a off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-565b8722-356f-4b04-bffe-4f5f115a465a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:59:02.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5672" for this suite. 
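------------------------------
Note: the required-NodeAffinity spec labels node1 with the random key set to "42", then relaunches the pod so it can only land on that node. A sketch of a required node-affinity term matching that label — the key and value are from the log; the exact field the suite sets (affinity vs. a plain nodeSelector) is not visible here, so treat the shape as illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	affinity := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			// "Required" terms are hard constraints at scheduling time.
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-565b8722-356f-4b04-bffe-4f5f115a465a",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(affinity, "", "  ")
	fmt.Println(string(b))
}
------------------------------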
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.139 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":7,"skipped":3482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:59:02.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 20 23:59:02.795: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 20 23:59:02.803: INFO: Waiting for terminating namespaces to be deleted... 
May 20 23:59:02.806: INFO: Logging pods the apiserver thinks is on node node1 before test
May 20 23:59:02.817: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded)
May 20 23:59:02.817: INFO: Container nodereport ready: true, restart count 0
May 20 23:59:02.817: INFO: Container reconcile ready: true, restart count 0
May 20 23:59:02.817: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded)
May 20 23:59:02.817: INFO: Container discover ready: false, restart count 0
May 20 23:59:02.817: INFO: Container init ready: false, restart count 0
May 20 23:59:02.817: INFO: Container install ready: false, restart count 0
May 20 23:59:02.817: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container kube-flannel ready: true, restart count 3
May 20 23:59:02.817: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container kube-multus ready: true, restart count 1
May 20 23:59:02.817: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container kube-proxy ready: true, restart count 2
May 20 23:59:02.817: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 20 23:59:02.817: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container nginx-proxy ready: true, restart count 2
May 20 23:59:02.817: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container nfd-worker ready: true, restart count 0
May 20 23:59:02.817: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container kube-sriovdp ready: true, restart count 0
May 20 23:59:02.817: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded)
May 20 23:59:02.817: INFO: Container collectd ready: true, restart count 0
May 20 23:59:02.817: INFO: Container collectd-exporter ready: true, restart count 0
May 20 23:59:02.817: INFO: Container rbac-proxy ready: true, restart count 0
May 20 23:59:02.817: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded)
May 20 23:59:02.817: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 23:59:02.817: INFO: Container node-exporter ready: true, restart count 0
May 20 23:59:02.817: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded)
May 20 23:59:02.817: INFO: Container config-reloader ready: true, restart count 0
May 20 23:59:02.817: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 20 23:59:02.817: INFO: Container grafana ready: true, restart count 0
May 20 23:59:02.817: INFO: Container prometheus ready: true, restart count 1
May 20 23:59:02.817: INFO: with-labels from sched-pred-5672 started at 2022-05-20 23:58:58 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.817: INFO: Container with-labels ready: true, restart count 0
May 20 23:59:02.817: INFO: Logging pods the apiserver thinks is on node node2 before test
May 20 23:59:02.827: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded)
May 20 23:59:02.827: INFO: Container nodereport ready: true, restart count 0
May 20 23:59:02.827: INFO: Container reconcile ready: true, restart count 0
May 20 23:59:02.827: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded)
May 20 23:59:02.827: INFO: Container discover ready: false, restart count 0
May 20 23:59:02.827: INFO: Container init ready: false, restart count 0
May 20 23:59:02.827: INFO: Container install ready: false, restart count 0
May 20 23:59:02.827: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container cmk-webhook ready: true, restart count 0
May 20 23:59:02.827: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container kube-flannel ready: true, restart count 2
May 20 23:59:02.827: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container kube-multus ready: true, restart count 1
May 20 23:59:02.827: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container kube-proxy ready: true, restart count 2
May 20 23:59:02.827: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 20 23:59:02.827: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container nginx-proxy ready: true, restart count 2
May 20 23:59:02.827: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container nfd-worker ready: true, restart count 0
May 20 23:59:02.827: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container kube-sriovdp ready: true, restart count 0
May 20 23:59:02.827: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded)
May 20 23:59:02.827: INFO: Container collectd ready: true, restart count 0
May 20 23:59:02.827: INFO: Container collectd-exporter ready: true, restart count 0
May 20 23:59:02.827: INFO: Container rbac-proxy ready: true, restart count 0
May 20 23:59:02.827: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded)
May 20 23:59:02.827: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 20 23:59:02.827: INFO: Container node-exporter ready: true, restart count 0
May 20 23:59:02.827: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container tas-extender ready: true, restart count 0
May 20 23:59:02.827: INFO: still-no-tolerations from sched-pred-4995 started at 2022-05-20 23:58:54 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container still-no-tolerations ready: false, restart count 0
May 20 23:59:02.827: INFO: rs-e2e-pts-score-2c7b2 from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container e2e-pts-score ready: false, restart count 0
May 20 23:59:02.827: INFO: rs-e2e-pts-score-lw6sl from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container e2e-pts-score ready: false, restart count 0
May 20 23:59:02.827: INFO: rs-e2e-pts-score-v7lxb from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container e2e-pts-score ready: false, restart count 0
May 20 23:59:02.827: INFO: rs-e2e-pts-score-wbw9m from sched-priority-1623 started at 2022-05-20 23:58:26 +0000 UTC (1 container statuses recorded)
May 20 23:59:02.827: INFO: Container e2e-pts-score ready: false, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9b33e580-88ff-41a8-adb0-4b88b6913efd 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-9b33e580-88ff-41a8-adb0-4b88b6913efd off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9b33e580-88ff-41a8-adb0-4b88b6913efd
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:59:18.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4732" for this suite.
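For reference, pod1, pod2 and pod3 above can all land on node1 because a hostPort only conflicts when hostIP, hostPort and protocol all collide. A minimal sketch of pod3 (the manifest shape and image are assumptions; the names, port, IPs and node label come from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod3                     # pod1/pod2 differ only in hostIP and protocol
spec:
  nodeSelector:
    kubernetes.io/e2e-9b33e580-88ff-41a8-adb0-4b88b6913efd: "90"
  containers:
  - name: pod3
    image: k8s.gcr.io/pause:3.4.1      # assumed; the pause image is pulled elsewhere in this run
    ports:
    - containerPort: 54321
      hostPort: 54321
      hostIP: 10.10.190.207            # pod2 binds the same IP and port over TCP
      protocol: UDP                    # pod1/pod2 use TCP, so there is no conflict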
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.178 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":8,"skipped":4145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:59:18.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 20 23:59:18.989: INFO: Waiting up to 1m0s for all nodes to be ready
May 21 00:00:19.046: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node1.
STEP: Apply 10 fake resource to node node2.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
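The "fake resource" above is an extended resource patched into each node's status with capacity 10; high, low and medium refer to PriorityClasses, and the medium pod carries a TopologySpreadConstraint over the dedicated kubernetes.io/e2e-pts-preemption key, so the scheduler must preempt a low pod to satisfy the spread. A sketch of the medium pod under stated assumptions (the resource name example.com/fakePTSRes, the label, and the PriorityClass name are not printed in the log and are assumed):

apiVersion: v1
kind: Pod
metadata:
  name: medium
  labels:
    app: pts-preemption                  # hypothetical label the constraint selects on
spec:
  priorityClassName: medium-priority     # assumed name for the suite's medium PriorityClass
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/e2e-pts-preemption
    whenUnsatisfiable: DoNotSchedule
    labelSelector:
      matchLabels:
        app: pts-preemption
  containers:
  - name: medium
    image: k8s.gcr.io/pause:3.4.1        # assumed
    resources:                           # extended resources must set requests == limits
      requests:
        example.com/fakePTSRes: "4"      # assumed size: does not fit in the 1/10 left free
      limits:
        example.com/fakePTSRes: "4"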
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 21 00:00:59.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3680" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:100.395 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":9,"skipped":4603,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 21 00:00:59.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
May 21 00:00:59.377: INFO: Waiting up to 1m0s for all nodes to be ready
May 21 00:01:59.437: INFO: Waiting for terminating namespaces to be deleted...
May 21 00:01:59.439: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 21 00:01:59.459: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 21 00:01:59.459: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 21 00:01:59.459: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 21 00:01:59.459: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 21 00:01:59.474: INFO: ComputeCPUMemFraction for node: node1
May 21 00:01:59.474: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400
May 21 00:01:59.474: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600
May 21 00:01:59.474: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000
May 21 00:01:59.474: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840
May 21 00:01:59.474: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200
May 21 00:01:59.474: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000
May 21 00:01:59.474: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 21 00:01:59.474: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200
May 21 00:01:59.474: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600
May 21 00:01:59.475: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
May 21 00:01:59.475: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
May 21 00:01:59.475: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
May 21 00:01:59.475: INFO: ComputeCPUMemFraction for node: node2
May 21 00:01:59.475: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400
May 21 00:01:59.475: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600
May 21 00:01:59.475: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000
May 21 00:01:59.475: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840
May 21 00:01:59.475: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 21 00:01:59.475: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600
May 21 00:01:59.475: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200
May 21 00:01:59.475: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200
May 21 00:01:59.475: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
May 21 00:01:59.475: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
[It] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
May 21 00:01:59.492: INFO: ComputeCPUMemFraction for node: node1
May 21 00:01:59.492: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400
May 21 00:01:59.492: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600
May 21 00:01:59.493: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000
May 21 00:01:59.493: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840
May 21 00:01:59.493: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000
May 21 00:01:59.493: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 21 00:01:59.493: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600
May 21 00:01:59.493: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
May 21 00:01:59.493: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
May 21 00:01:59.493: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
May 21 00:01:59.493: INFO: ComputeCPUMemFraction for node: node2
May 21 00:01:59.493: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400
May 21 00:01:59.493: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600
May 21 00:01:59.493: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000
May 21 00:01:59.493: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840
May 21 00:01:59.493: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 21 00:01:59.493: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600
May 21 00:01:59.493: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200
May 21 00:01:59.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200
May 21 00:01:59.493: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
May 21 00:01:59.493: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
May 21 00:01:59.508: INFO: Waiting for running...
May 21 00:01:59.512: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
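The fractions above are simple ratios: cpuFraction = totalRequestedCPUResource / cpuAllocatableMil (for node1, 937 / 77000 ≈ 0.0122), and memFraction likewise. To neutralize any imbalance between nodes, the suite then creates one "balanced" filler pod per node whose requests top the node up to an equal fraction, 0.5 here (for node1, 937 + 37563 = 38500 = 77000 / 2, as the next lines show). A sketch of node1's filler under stated assumptions (only the request sizes appear in the log; the pinning mechanism and image are assumed):

apiVersion: v1
kind: Pod
metadata:
  name: 398b2906-76fb-42ea-92a6-9eafa7126978-0   # filler name as it appears below
spec:
  nodeName: node1                  # assumed: each filler is tied to one node
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.4.1  # assumed
    resources:
      requests:
        cpu: 37563m                # brings node1 to cpuFraction 0.5
        memory: "87680079872"      # brings node1 to memFraction ~0.5
      limits:
        cpu: 37563m
        memory: "87680079872"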
May 21 00:02:04.592: INFO: ComputeCPUMemFraction for node: node1
May 21 00:02:04.592: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400
May 21 00:02:04.592: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600
May 21 00:02:04.592: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000
May 21 00:02:04.592: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840
May 21 00:02:04.592: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200
May 21 00:02:04.592: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000
May 21 00:02:04.592: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 21 00:02:04.592: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200
May 21 00:02:04.592: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200
May 21 00:02:04.592: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600
May 21 00:02:04.592: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200
May 21 00:02:04.592: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
May 21 00:02:04.592: INFO: Pod for on the node: 398b2906-76fb-42ea-92a6-9eafa7126978-0, Cpu: 37563, Mem: 87680079872
May 21 00:02:04.592: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
May 21 00:02:04.592: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 21 00:02:04.592: INFO: ComputeCPUMemFraction for node: node2
May 21 00:02:04.592: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400
May 21 00:02:04.592: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600
May 21 00:02:04.592: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200
May 21 00:02:04.592: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000
May 21 00:02:04.592: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840
May 21 00:02:04.592: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200
May 21 00:02:04.592: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200
May 21 00:02:04.593: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 21 00:02:04.593: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200
May 21 00:02:04.593: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200
May 21 00:02:04.593: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600
May 21 00:02:04.593: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200
May 21 00:02:04.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200
May 21 00:02:04.593: INFO: Pod for on the node: 26e9851a-6f30-410f-a57e-8cd22536edb3-0, Cpu: 38013, Mem: 88949942272
May 21 00:02:04.593: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
May 21 00:02:04.593: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
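The avoidPod annotation being applied here is the deprecated alpha scheduler hint that marks a node as a last-resort placement for pods owned by a given controller. A sketch of the annotated node, assuming the standard preferAvoidPods JSON shape (the RC name matches the log; the uid, reason and message are placeholders):

apiVersion: v1
kind: Node
metadata:
  name: node1
  annotations:
    scheduler.alpha.kubernetes.io/preferAvoidPods: |
      {"preferAvoidPods":[{"podSignature":{"podController":{"apiVersion":"v1","kind":"ReplicationController","name":"scheduler-priority-avoid-pod","uid":"<rc-uid>","controller":true}},"reason":"e2e","message":"pods of scheduler-priority-avoid-pod should avoid this node"}]}

With this in place, scaling the RC to one replica (the next step) should land the pod on node2, which is exactly what "Verify the pods should not scheduled to the node: node1" checks.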
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8180 to 1
STEP: Verify the pods should not scheduled to the node: node1
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8180, will wait for the garbage collector to delete the pods
May 21 00:02:10.778: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.282323ms
May 21 00:02:10.879: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 101.110785ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 21 00:02:36.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-8180" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:97.560 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":10,"skipped":4660,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 21 00:02:36.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
May 21 00:02:36.937: INFO: Waiting up to 1m0s for all nodes to be ready
May 21 00:03:36.992: INFO: Waiting for terminating namespaces to be deleted...
May 21 00:03:36.995: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 21 00:03:37.014: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 21 00:03:37.014: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 21 00:03:37.014: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 21 00:03:37.014: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 21 00:03:37.029: INFO: ComputeCPUMemFraction for node: node1
May 21 00:03:37.029: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400
May 21 00:03:37.029: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600
May 21 00:03:37.029: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000
May 21 00:03:37.029: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840
May 21 00:03:37.029: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000
May 21 00:03:37.029: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 21 00:03:37.029: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600
May 21 00:03:37.029: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
May 21 00:03:37.029: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
May 21 00:03:37.029: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
May 21 00:03:37.029: INFO: ComputeCPUMemFraction for node: node2
May 21 00:03:37.029: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400
May 21 00:03:37.029: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600
May 21 00:03:37.029: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000
May 21 00:03:37.029: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840
May 21 00:03:37.029: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 21 00:03:37.029: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600
May 21 00:03:37.029: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200
May 21 00:03:37.029: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200
May 21 00:03:37.029: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
May 21 00:03:37.029: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
[It] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
May 21 00:03:37.047: INFO: ComputeCPUMemFraction for node: node1
May 21 00:03:37.047: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400
May 21 00:03:37.047: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600
May 21 00:03:37.047: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000
May 21 00:03:37.047: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840
May 21 00:03:37.047: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200
May 21 00:03:37.047: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000
May 21 00:03:37.048: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 21 00:03:37.048: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600
May 21 00:03:37.048: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
May 21 00:03:37.048: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117
May 21 00:03:37.048: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877
May 21 00:03:37.048: INFO: ComputeCPUMemFraction for node: node2
May 21 00:03:37.048: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400
May 21 00:03:37.048: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600
May 21 00:03:37.048: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000
May 21 00:03:37.048: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840
May 21 00:03:37.048: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 21 00:03:37.048: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600
May 21 00:03:37.048: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200
May 21 00:03:37.048: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200
May 21 00:03:37.048: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
May 21 00:03:37.048: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884608000, memFraction: 0.0028227394500034346
May 21 00:03:37.064: INFO: Waiting for running...
May 21 00:03:37.065: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 21 00:03:42.136: INFO: ComputeCPUMemFraction for node: node1
May 21 00:03:42.136: INFO: Pod for on the node: cmk-c5x47, Cpu: 200, Mem: 419430400
May 21 00:03:42.136: INFO: Pod for on the node: cmk-init-discover-node1-vkzkd, Cpu: 300, Mem: 629145600
May 21 00:03:42.136: INFO: Pod for on the node: kube-flannel-2blt7, Cpu: 150, Mem: 64000000
May 21 00:03:42.136: INFO: Pod for on the node: kube-multus-ds-amd64-krd6m, Cpu: 100, Mem: 94371840
May 21 00:03:42.136: INFO: Pod for on the node: kube-proxy-v8kzq, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-6c2f8, Cpu: 50, Mem: 64000000
May 21 00:03:42.136: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 21 00:03:42.136: INFO: Pod for on the node: node-feature-discovery-worker-rh55h, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: collectd-875j8, Cpu: 300, Mem: 629145600
May 21 00:03:42.136: INFO: Pod for on the node: node-exporter-czwvh, Cpu: 112, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
May 21 00:03:42.136: INFO: Pod for on the node: 1ddf52a6-9010-4f59-8b7b-8c10fd0eac1b-0, Cpu: 37563, Mem: 87680079872
May 21 00:03:42.136: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
May 21 00:03:42.136: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 21 00:03:42.136: INFO: ComputeCPUMemFraction for node: node2
May 21 00:03:42.136: INFO: Pod for on the node: cmk-9hxtl, Cpu: 200, Mem: 419430400
May 21 00:03:42.136: INFO: Pod for on the node: cmk-init-discover-node2-b7gw4, Cpu: 300, Mem: 629145600
May 21 00:03:42.136: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-5kbbc, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: kube-flannel-jpmpd, Cpu: 150, Mem: 64000000
May 21 00:03:42.136: INFO: Pod for on the node: kube-multus-ds-amd64-p22zp, Cpu: 100, Mem: 94371840
May 21 00:03:42.136: INFO: Pod for on the node: kube-proxy-rg2fp, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-66r9g, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 21 00:03:42.136: INFO: Pod for on the node: node-feature-discovery-worker-nphk9, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: collectd-h4pzk, Cpu: 300, Mem: 629145600
May 21 00:03:42.136: INFO: Pod for on the node: node-exporter-vm24n, Cpu: 112, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd, Cpu: 100, Mem: 209715200
May 21 00:03:42.136: INFO: Pod for on the node: c183d1d9-067a-434b-8634-bb0ba3a4a3d2-0, Cpu: 38013, Mem: 88949942272
May 21 00:03:42.136: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
May 21 00:03:42.136: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Trying to apply 10 (tolerable) taints on the first node.
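Each of the ten taints that follow has the form <unique key>=<unique value>:PreferNoSchedule, and the test pod declares a matching toleration for every taint on node1 while tolerating none of the taints on node2. A sketch of the tolerating pod (the pod name with-tolerations and the first taint's key and value come from the log; the image is assumed and the remaining entries follow the same pattern):

apiVersion: v1
kind: Pod
metadata:
  name: with-tolerations
spec:
  tolerations:
  - key: kubernetes.io/e2e-scheduling-priorities-5284e059-498c-4411-a73d
    operator: Equal
    value: testing-taint-value-196a7e6e-950b-4dee-96e4-0f6aa11ab3c9
    effect: PreferNoSchedule
  # ...nine more tolerations, one per tolerable taint on node1...
  containers:
  - name: with-tolerations
    image: k8s.gcr.io/pause:3.4.1   # assumed

Because PreferNoSchedule is a soft effect, the pod could legally land anywhere; the scoring plugin simply prefers the node whose taints are all tolerated, which is why the spec expects node1.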
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5284e059-498c-4411-a73d=testing-taint-value-196a7e6e-950b-4dee-96e4-0f6aa11ab3c9:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-95f7e906-010d-46e0-a192=testing-taint-value-f5a8dc54-9c50-4622-a271-714079a4a89b:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-53930e5c-aa53-406b-9475=testing-taint-value-66d3d13f-f02c-43bb-8e30-5e053e8d55ba:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bce90f48-1c0e-4731-981b=testing-taint-value-a71047da-0622-4e31-b187-e169fbef2738:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2a3ca001-0cc1-4b15-a971=testing-taint-value-1f917d03-707f-4578-b203-f8a3dcea0647:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-46812107-9e05-4762-9f78=testing-taint-value-95ba93b8-ba19-4dee-a70c-569191acf58a:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c8bd6401-45f4-41e4-acaf=testing-taint-value-f9dd68a7-26f5-45f0-9aa5-c0f89711e55f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cad55e4e-7825-4d7d-9dfd=testing-taint-value-9b004f2b-31dd-45f9-8e0c-25de52783b3b:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ba8cc8e5-b2d4-4eb2-96ab=testing-taint-value-72de49eb-ea0d-4cc8-8502-92525d2e3c39:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7a965873-9c1b-4251-ad2a=testing-taint-value-0071872b-b6da-4334-9c6b-877a65af9875:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-031f64b1-d0b7-4b62-b53d=testing-taint-value-968d9040-19c8-4418-acba-ab87b211f1e1:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8fc5e79f-f8f3-46a0-8baf=testing-taint-value-0cc6285d-4afe-4332-9570-c308d8f22a1b:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-35e02af6-ebbb-4c9c-9679=testing-taint-value-7b2f3448-7e6b-4879-a4e4-a17a9223edb3:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2bd348fd-24d1-4fc2-a5c1=testing-taint-value-74035936-84a9-4b7a-be1d-5fcaa36468f2:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d28b78cc-b65f-4f37-bd5b=testing-taint-value-407d7dcd-3f6a-468d-8a75-d2e2569b2479:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ae13756c-b1e5-4755-8e50=testing-taint-value-e51b106e-fd1c-49fe-9bd4-0008bd2fe38b:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-552a7fb4-686c-49e3-83b3=testing-taint-value-7f770011-bbe4-4056-91c4-5334a3552dea:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6897c54a-23c3-4c17-b18d=testing-taint-value-1e0c8660-a0f7-4f54-89f1-af439e707fb2:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e91c8d92-e829-41d3-b2be=testing-taint-value-58b4f316-ade0-4e29-804f-b69b305e3143:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a5bb14ba-2476-4b50-9477=testing-taint-value-34a9a4ac-c780-40f9-8ba6-75b93b300369:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-031f64b1-d0b7-4b62-b53d=testing-taint-value-968d9040-19c8-4418-acba-ab87b211f1e1:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8fc5e79f-f8f3-46a0-8baf=testing-taint-value-0cc6285d-4afe-4332-9570-c308d8f22a1b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-35e02af6-ebbb-4c9c-9679=testing-taint-value-7b2f3448-7e6b-4879-a4e4-a17a9223edb3:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2bd348fd-24d1-4fc2-a5c1=testing-taint-value-74035936-84a9-4b7a-be1d-5fcaa36468f2:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d28b78cc-b65f-4f37-bd5b=testing-taint-value-407d7dcd-3f6a-468d-8a75-d2e2569b2479:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ae13756c-b1e5-4755-8e50=testing-taint-value-e51b106e-fd1c-49fe-9bd4-0008bd2fe38b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-552a7fb4-686c-49e3-83b3=testing-taint-value-7f770011-bbe4-4056-91c4-5334a3552dea:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6897c54a-23c3-4c17-b18d=testing-taint-value-1e0c8660-a0f7-4f54-89f1-af439e707fb2:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e91c8d92-e829-41d3-b2be=testing-taint-value-58b4f316-ade0-4e29-804f-b69b305e3143:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a5bb14ba-2476-4b50-9477=testing-taint-value-34a9a4ac-c780-40f9-8ba6-75b93b300369:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5284e059-498c-4411-a73d=testing-taint-value-196a7e6e-950b-4dee-96e4-0f6aa11ab3c9:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-95f7e906-010d-46e0-a192=testing-taint-value-f5a8dc54-9c50-4622-a271-714079a4a89b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-53930e5c-aa53-406b-9475=testing-taint-value-66d3d13f-f02c-43bb-8e30-5e053e8d55ba:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bce90f48-1c0e-4731-981b=testing-taint-value-a71047da-0622-4e31-b187-e169fbef2738:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2a3ca001-0cc1-4b15-a971=testing-taint-value-1f917d03-707f-4578-b203-f8a3dcea0647:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-46812107-9e05-4762-9f78=testing-taint-value-95ba93b8-ba19-4dee-a70c-569191acf58a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c8bd6401-45f4-41e4-acaf=testing-taint-value-f9dd68a7-26f5-45f0-9aa5-c0f89711e55f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cad55e4e-7825-4d7d-9dfd=testing-taint-value-9b004f2b-31dd-45f9-8e0c-25de52783b3b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ba8cc8e5-b2d4-4eb2-96ab=testing-taint-value-72de49eb-ea0d-4cc8-8502-92525d2e3c39:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7a965873-9c1b-4251-ad2a=testing-taint-value-0071872b-b6da-4334-9c6b-877a65af9875:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 21 00:03:57.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-2447" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:80.580 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":11,"skipped":4775,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 21 00:03:57.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 00:03:57.536: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 00:03:57.545: INFO: Waiting for terminating namespaces to be deleted...
May 21 00:03:57.548: INFO: Logging pods the apiserver thinks is on node node1 before test
May 21 00:03:57.559: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded)
May 21 00:03:57.559: INFO: Container nodereport ready: true, restart count 0
May 21 00:03:57.559: INFO: Container reconcile ready: true, restart count 0
May 21 00:03:57.559: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded)
May 21 00:03:57.559: INFO: Container discover ready: false, restart count 0
May 21 00:03:57.559: INFO: Container init ready: false, restart count 0
May 21 00:03:57.559: INFO: Container install ready: false, restart count 0
May 21 00:03:57.559: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container kube-flannel ready: true, restart count 3
May 21 00:03:57.559: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container kube-multus ready: true, restart count 1
May 21 00:03:57.559: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container kube-proxy ready: true, restart count 2
May 21 00:03:57.559: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 21 00:03:57.559: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container nginx-proxy ready: true, restart count 2
May 21 00:03:57.559: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container nfd-worker ready: true, restart count 0
May 21 00:03:57.559: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container kube-sriovdp ready: true, restart count 0
May 21 00:03:57.559: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded)
May 21 00:03:57.559: INFO: Container collectd ready: true, restart count 0
May 21 00:03:57.559: INFO: Container collectd-exporter ready: true, restart count 0
May 21 00:03:57.559: INFO: Container rbac-proxy ready: true, restart count 0
May 21 00:03:57.559: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded)
May 21 00:03:57.559: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 21 00:03:57.559: INFO: Container node-exporter ready: true, restart count 0
May 21 00:03:57.559: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded)
May 21 00:03:57.559: INFO: Container config-reloader ready: true, restart count 0
May 21 00:03:57.559: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 21 00:03:57.559: INFO: Container grafana ready: true, restart count 0
May 21 00:03:57.559: INFO: Container prometheus ready: true, restart count 1
May 21 00:03:57.559: INFO: with-tolerations from sched-priority-2447 started at 2022-05-21 00:03:42 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.559: INFO: Container with-tolerations ready: true, restart count 0
May 21 00:03:57.559: INFO: Logging pods the apiserver thinks is on node node2 before test
May 21 00:03:57.569: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded)
May 21 00:03:57.569: INFO: Container nodereport ready: true, restart count 0
May 21 00:03:57.569: INFO: Container reconcile ready: true, restart count 0
May 21 00:03:57.569: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded)
May 21 00:03:57.569: INFO: Container discover ready: false, restart count 0
May 21 00:03:57.569: INFO: Container init ready: false, restart count 0
May 21 00:03:57.569: INFO: Container install ready: false, restart count 0
May 21 00:03:57.569: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container cmk-webhook ready: true, restart count 0
May 21 00:03:57.569: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container kube-flannel ready: true, restart count 2
May 21 00:03:57.569: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container kube-multus ready: true, restart count 1
May 21 00:03:57.569: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container kube-proxy ready: true, restart count 2
May 21 00:03:57.569: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 21 00:03:57.569: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container nginx-proxy ready: true, restart count 2
May 21 00:03:57.569: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container nfd-worker ready: true, restart count 0
May 21 00:03:57.569: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.569: INFO: Container kube-sriovdp ready: true, restart count 0
May 21 00:03:57.569: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded)
May 21 00:03:57.570: INFO: Container collectd ready: true, restart count 0
May 21 00:03:57.570: INFO: Container collectd-exporter ready: true, restart count 0
May 21 00:03:57.570: INFO: Container rbac-proxy ready: true, restart count 0
May 21 00:03:57.570: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded)
May 21 00:03:57.570: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 21 00:03:57.570: INFO: Container node-exporter ready: true, restart count 0
May 21 00:03:57.570: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded)
May 21 00:03:57.570: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f0f5e76fdaa3da], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 21 00:03:58.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-367" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":12,"skipped":5540,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 21 00:03:58.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 21 00:03:58.646: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 21 00:03:58.654: INFO: Waiting for terminating namespaces to be deleted...
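The restricted pod in the NodeAffinity spec above is simply a pod whose node selector matches no node in the cluster, which produces the FailedScheduling event quoted there. A minimal reproduction (the pod name comes from the event; the selector key and value are hypothetical placeholders for a label no node carries):

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    e2e.example.com/no-such-label: "nonempty"   # hypothetical: no node has this label
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.4.1               # assumed

The two worker nodes fail the selector check and the three masters are excluded by their node-role.kubernetes.io/master taint, giving the "0/5 nodes are available" message.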
May 21 00:03:58.657: INFO: Logging pods the apiserver thinks is on node node1 before test May 21 00:03:58.668: INFO: cmk-c5x47 from kube-system started at 2022-05-20 20:16:15 +0000 UTC (2 container statuses recorded) May 21 00:03:58.668: INFO: Container nodereport ready: true, restart count 0 May 21 00:03:58.668: INFO: Container reconcile ready: true, restart count 0 May 21 00:03:58.668: INFO: cmk-init-discover-node1-vkzkd from kube-system started at 2022-05-20 20:15:33 +0000 UTC (3 container statuses recorded) May 21 00:03:58.668: INFO: Container discover ready: false, restart count 0 May 21 00:03:58.668: INFO: Container init ready: false, restart count 0 May 21 00:03:58.668: INFO: Container install ready: false, restart count 0 May 21 00:03:58.668: INFO: kube-flannel-2blt7 from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container kube-flannel ready: true, restart count 3 May 21 00:03:58.668: INFO: kube-multus-ds-amd64-krd6m from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container kube-multus ready: true, restart count 1 May 21 00:03:58.668: INFO: kube-proxy-v8kzq from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container kube-proxy ready: true, restart count 2 May 21 00:03:58.668: INFO: kubernetes-dashboard-785dcbb76d-6c2f8 from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 21 00:03:58.668: INFO: nginx-proxy-node1 from kube-system started at 2022-05-20 20:06:57 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container nginx-proxy ready: true, restart count 2 May 21 00:03:58.668: INFO: node-feature-discovery-worker-rh55h from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container nfd-worker ready: true, restart count 0 May 21 00:03:58.668: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qn9gl from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 21 00:03:58.668: INFO: Container kube-sriovdp ready: true, restart count 0 May 21 00:03:58.668: INFO: collectd-875j8 from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 21 00:03:58.668: INFO: Container collectd ready: true, restart count 0 May 21 00:03:58.668: INFO: Container collectd-exporter ready: true, restart count 0 May 21 00:03:58.668: INFO: Container rbac-proxy ready: true, restart count 0 May 21 00:03:58.668: INFO: node-exporter-czwvh from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 21 00:03:58.668: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 21 00:03:58.668: INFO: Container node-exporter ready: true, restart count 0 May 21 00:03:58.668: INFO: prometheus-k8s-0 from monitoring started at 2022-05-20 20:17:30 +0000 UTC (4 container statuses recorded) May 21 00:03:58.668: INFO: Container config-reloader ready: true, restart count 0 May 21 00:03:58.668: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 21 00:03:58.668: INFO: Container grafana ready: true, restart count 0 May 21 00:03:58.668: INFO: Container prometheus ready: true, restart count 1 May 21 00:03:58.668: INFO: with-tolerations from sched-priority-2447 started at 2022-05-21 00:03:42 +0000 UTC (1 
container statuses recorded) May 21 00:03:58.668: INFO: Container with-tolerations ready: true, restart count 0 May 21 00:03:58.668: INFO: Logging pods the apiserver thinks is on node node2 before test May 21 00:03:58.688: INFO: cmk-9hxtl from kube-system started at 2022-05-20 20:16:16 +0000 UTC (2 container statuses recorded) May 21 00:03:58.688: INFO: Container nodereport ready: true, restart count 0 May 21 00:03:58.688: INFO: Container reconcile ready: true, restart count 0 May 21 00:03:58.688: INFO: cmk-init-discover-node2-b7gw4 from kube-system started at 2022-05-20 20:15:53 +0000 UTC (3 container statuses recorded) May 21 00:03:58.688: INFO: Container discover ready: false, restart count 0 May 21 00:03:58.688: INFO: Container init ready: false, restart count 0 May 21 00:03:58.688: INFO: Container install ready: false, restart count 0 May 21 00:03:58.688: INFO: cmk-webhook-6c9d5f8578-5kbbc from kube-system started at 2022-05-20 20:16:16 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container cmk-webhook ready: true, restart count 0 May 21 00:03:58.688: INFO: kube-flannel-jpmpd from kube-system started at 2022-05-20 20:04:10 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container kube-flannel ready: true, restart count 2 May 21 00:03:58.688: INFO: kube-multus-ds-amd64-p22zp from kube-system started at 2022-05-20 20:04:18 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container kube-multus ready: true, restart count 1 May 21 00:03:58.688: INFO: kube-proxy-rg2fp from kube-system started at 2022-05-20 20:03:14 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container kube-proxy ready: true, restart count 2 May 21 00:03:58.688: INFO: kubernetes-metrics-scraper-5558854cb-66r9g from kube-system started at 2022-05-20 20:04:50 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 21 00:03:58.688: INFO: nginx-proxy-node2 from kube-system started at 2022-05-20 20:03:09 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container nginx-proxy ready: true, restart count 2 May 21 00:03:58.688: INFO: node-feature-discovery-worker-nphk9 from kube-system started at 2022-05-20 20:11:58 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container nfd-worker ready: true, restart count 0 May 21 00:03:58.688: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-wl7nk from kube-system started at 2022-05-20 20:13:08 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container kube-sriovdp ready: true, restart count 0 May 21 00:03:58.688: INFO: collectd-h4pzk from monitoring started at 2022-05-20 20:21:17 +0000 UTC (3 container statuses recorded) May 21 00:03:58.688: INFO: Container collectd ready: true, restart count 0 May 21 00:03:58.688: INFO: Container collectd-exporter ready: true, restart count 0 May 21 00:03:58.688: INFO: Container rbac-proxy ready: true, restart count 0 May 21 00:03:58.688: INFO: node-exporter-vm24n from monitoring started at 2022-05-20 20:17:20 +0000 UTC (2 container statuses recorded) May 21 00:03:58.688: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 21 00:03:58.688: INFO: Container node-exporter ready: true, restart count 0 May 21 00:03:58.688: INFO: tas-telemetry-aware-scheduling-84ff454dfb-ddzzd from monitoring started at 2022-05-20 20:20:26 +0000 UTC (1 container statuses recorded) May 21 00:03:58.688: INFO: Container tas-extender ready: 
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires an unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5ea0cf4188b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Warning], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5ea7842a53f], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5eb7ad7afb8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1869/filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1 to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5ec07550ff8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5ec1ce1cfaa], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 361.537143ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5ec2467e2e5], Reason = [Created], Message = [Created container filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1.16f0f5ec2ba10b9d], Reason = [Started], Message = [Started container filler-pod-23a2c391-ab4c-4aa7-92ec-0d6cf66acfb1]
STEP: Considering event: Type = [Normal], Name = [without-label.16f0f5e91be94ba4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1869/without-label to node1]
STEP: Considering event: Type = [Normal], Name = [without-label.16f0f5e9712a9146], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-label.16f0f5e988b03d1f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 394.630204ms]
STEP: Considering event: Type = [Normal], Name = [without-label.16f0f5e98fa88b82], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16f0f5e9960ee8f0], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16f0f5ea0c28a8b5], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [without-label.16f0f5ea10275fbe], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-x96p6" : object "sched-pred-1869"/"kube-root-ca.crt" not registered]
STEP: Considering event: Type = [Warning], Name = [additional-pod6a828319-47f6-45c4-af3f-3f85d2a71b2f.16f0f5ec625f2802], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
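[Editor's note] What this spec exercises: when a pod names a RuntimeClass that declares overhead, the scheduler adds overhead.podFixed on top of the pod's container requests before checking node capacity. Here the fake extended resource example.com/beardsecond backs that overhead, a filler pod consumes most of each worker node's capacity, and the additional pod's request-plus-overhead no longer fits, hence "2 Insufficient example.com/beardsecond"; the other three nodes are excluded by the node-role.kubernetes.io/master taint, giving "0/5 nodes are available". The sketch below shows the two objects involved; it is a hedged illustration, not the test's actual helper code. The RuntimeClass name, handler, namespace, pod name, and quantities are invented for the example.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// A RuntimeClass whose overhead is charged in the fake extended resource.
	// The scheduler adds overhead.podFixed to the container requests of every
	// pod that references this class before filtering nodes. (Illustrative
	// name, handler, and quantity.)
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"},
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceName("example.com/beardsecond"): resource.MustParse("250"),
			},
		},
	}
	if _, err := clientset.NodeV1().RuntimeClasses().Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// A pod that requests the extended resource. Its effective demand is
	// request + overhead (500 + 250 here), so once a filler pod has consumed
	// most of the node's example.com/beardsecond capacity it fails to schedule
	// with "Insufficient example.com/beardsecond". Extended resources require
	// limits equal to requests.
	rcName := "overhead-demo"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceName("example.com/beardsecond"): resource.MustParse("500"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceName("example.com/beardsecond"): resource.MustParse("500"),
					},
				},
			}},
		},
	}
	if _, err := clientset.CoreV1().Pods("sched-pred-demo").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```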
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 21 00:04:19.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1869" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:21.265 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
    verify pod overhead is accounted for
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":13,"skipped":5551,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 21 00:04:19.893: INFO: Running AfterSuite actions on all nodes
May 21 00:04:19.893: INFO: Running AfterSuite actions on node 1
May 21 00:04:19.893: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0}

Ran 13 of 5773 Specs in 553.084 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped
PASS

Ginkgo ran 1 suite in 9m14.461009715s
Test Suite Passed