I0522 01:07:13.520612 23 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0522 01:07:13.520736 23 e2e.go:129] Starting e2e run "2ecde7ad-f0d5-4ec5-a94f-282a2d7c51ac" on Ginkgo node 1 {"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1621645632 - Will randomize all specs Will run 12 of 5484 specs May 22 01:07:13.553: INFO: >>> kubeConfig: /root/.kube/config May 22 01:07:13.555: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 22 01:07:13.582: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 22 01:07:13.645: INFO: The status of Pod cmk-init-discover-node1-48g7j is Succeeded, skipping waiting May 22 01:07:13.645: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 22 01:07:13.645: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 22 01:07:13.645: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 22 01:07:13.656: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 22 01:07:13.656: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 22 01:07:13.656: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 22 01:07:13.656: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 22 01:07:13.656: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 22 01:07:13.656: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 22 01:07:13.656: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 22 01:07:13.656: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 22 01:07:13.656: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 22 01:07:13.656: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 22 01:07:13.656: INFO: e2e test version: v1.19.10 May 22 01:07:13.656: INFO: kube-apiserver version: v1.19.8 May 22 01:07:13.657: INFO: >>> kubeConfig: /root/.kube/config May 22 01:07:13.661: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] 
[sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:07:13.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred May 22 01:07:13.687: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 22 01:07:13.690: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:07:13.692: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:07:13.701: INFO: Waiting for terminating namespaces to be deleted... May 22 01:07:13.703: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:07:13.714: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:07:13.714: INFO: Container nodereport ready: true, restart count 0 May 22 01:07:13.714: INFO: Container reconcile ready: true, restart count 0 May 22 01:07:13.714: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:07:13.714: INFO: Container discover ready: false, restart count 0 May 22 01:07:13.714: INFO: Container init ready: false, restart count 0 May 22 01:07:13.714: INFO: Container install ready: false, restart count 0 May 22 01:07:13.714: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:07:13.714: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:07:13.714: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container kube-multus ready: true, restart count 1 May 22 01:07:13.714: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:07:13.714: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:07:13.714: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:07:13.714: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:07:13.714: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container nfd-worker ready: true, restart count 0 May 22 
01:07:13.714: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:07:13.714: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:07:13.714: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:07:13.714: INFO: Container collectd ready: true, restart count 0 May 22 01:07:13.714: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:07:13.714: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:07:13.714: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container statuses recorded) May 22 01:07:13.714: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:13.714: INFO: Container node-exporter ready: true, restart count 0 May 22 01:07:13.714: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 +0000 UTC (5 container statuses recorded) May 22 01:07:13.714: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:07:13.714: INFO: Container grafana ready: true, restart count 0 May 22 01:07:13.714: INFO: Container prometheus ready: true, restart count 1 May 22 01:07:13.714: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:07:13.714: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:07:13.714: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:07:13.714: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:13.714: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:07:13.714: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:07:13.714: INFO: Container tas-controller ready: true, restart count 0 May 22 01:07:13.714: INFO: Container tas-extender ready: true, restart count 0 May 22 01:07:13.714: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:07:13.720: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:07:13.720: INFO: Container nodereport ready: true, restart count 0 May 22 01:07:13.720: INFO: Container reconcile ready: true, restart count 0 May 22 01:07:13.720: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:07:13.720: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:07:13.720: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:07:13.720: INFO: Container kube-multus ready: true, restart count 1 May 22 01:07:13.720: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:07:13.720: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:07:13.720: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:07:13.720: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:07:13.720: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:07:13.720: INFO: Container nfd-worker ready: true, 
restart count 0 May 22 01:07:13.720: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:07:13.720: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:07:13.720: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:07:13.720: INFO: Container collectd ready: true, restart count 0 May 22 01:07:13.720: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:07:13.720: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:07:13.720: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 +0000 UTC (2 container statuses recorded) May 22 01:07:13.720: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:13.720: INFO: Container node-exporter ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16813e1caf371572], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16813e1caf80a0dc], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:07:14.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8333" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":1,"skipped":624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:07:14.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:07:14.786: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:07:14.795: INFO: Waiting for terminating namespaces to be deleted... 
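The spec that just passed above, "validates that NodeAffinity is respected if not matching", works by creating a pod whose node selector matches no node in the cluster and then asserting on the FailedScheduling events ("0/5 nodes are available: 5 node(s) didn't match node selector."). A minimal client-go sketch of the same idea follows; the namespace, pod name, and selector label are placeholders, not the values the e2e framework generates.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite logs above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A pod whose nodeSelector matches no node: it stays Pending and the
	// scheduler emits FailedScheduling events like the ones collected above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.2"},
			},
			// Placeholder label that no node in the cluster carries.
			NodeSelector: map[string]string{"example.com/no-such-label": "true"},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "- expect it to stay Pending")
}

Describing such a pod (kubectl describe pod restricted-pod) would show the same FailedScheduling events the suite considers above.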
May 22 01:07:14.797: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:07:14.815: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:07:14.815: INFO: Container nodereport ready: true, restart count 0 May 22 01:07:14.815: INFO: Container reconcile ready: true, restart count 0 May 22 01:07:14.815: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:07:14.815: INFO: Container discover ready: false, restart count 0 May 22 01:07:14.815: INFO: Container init ready: false, restart count 0 May 22 01:07:14.815: INFO: Container install ready: false, restart count 0 May 22 01:07:14.815: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:07:14.815: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:07:14.816: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:07:14.816: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container kube-multus ready: true, restart count 1 May 22 01:07:14.816: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:07:14.816: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:07:14.816: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:07:14.816: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:07:14.816: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:07:14.816: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:07:14.816: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:07:14.816: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:07:14.816: INFO: Container collectd ready: true, restart count 0 May 22 01:07:14.816: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:07:14.816: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:07:14.816: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container statuses recorded) May 22 01:07:14.816: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:14.816: INFO: Container node-exporter ready: true, restart count 0 May 22 01:07:14.816: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 
+0000 UTC (5 container statuses recorded) May 22 01:07:14.816: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:07:14.816: INFO: Container grafana ready: true, restart count 0 May 22 01:07:14.816: INFO: Container prometheus ready: true, restart count 1 May 22 01:07:14.816: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:07:14.816: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:07:14.816: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:07:14.816: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:14.816: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:07:14.816: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:07:14.816: INFO: Container tas-controller ready: true, restart count 0 May 22 01:07:14.816: INFO: Container tas-extender ready: true, restart count 0 May 22 01:07:14.816: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:07:14.832: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:07:14.832: INFO: Container nodereport ready: true, restart count 0 May 22 01:07:14.832: INFO: Container reconcile ready: true, restart count 0 May 22 01:07:14.832: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:07:14.832: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:07:14.832: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:07:14.832: INFO: Container kube-multus ready: true, restart count 1 May 22 01:07:14.832: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:07:14.832: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:07:14.832: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:07:14.832: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:07:14.832: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:07:14.832: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:07:14.832: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:07:14.832: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:07:14.832: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:07:14.832: INFO: Container collectd ready: true, restart count 0 May 22 01:07:14.832: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:07:14.832: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:07:14.832: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 +0000 UTC (2 container statuses recorded) May 22 01:07:14.832: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:14.832: INFO: Container node-exporter ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-4fd14693-2fea-4aaa-9545-4b97d15a7886 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1cf0370661], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4377/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d452d7ddc], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.39/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d45eeae8b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d6ae55b76], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 620.130235ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d71c2d58d], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d778f13f7], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1ddfc3889d], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [without-toleration.16813e1de06e89c8], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-nm888" : object "sched-pred-4377"/"default-token-nm888" not registered] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16813e1de1a347d5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16813e1de1ed81aa], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16813e1de1a347d5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16813e1de1ed81aa], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1cf0370661], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4377/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d452d7ddc], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.39/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d45eeae8b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d6ae55b76], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 620.130235ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d71c2d58d], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1d778f13f7], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16813e1ddfc3889d], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [without-toleration.16813e1de06e89c8], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-nm888" : object "sched-pred-4377"/"default-token-nm888" not registered] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16813e1e5f34c04c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4377/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-4fd14693-2fea-4aaa-9545-4b97d15a7886 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-4fd14693-2fea-4aaa-9545-4b97d15a7886 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e5da3aa1-06e1-41c9-8a2d-e7c4df4bcb0f=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:07:21.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4377" for this suite. 
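The taints-tolerations spec above first launches a throwaway pod to find a schedulable node (node2 here), applies a random NoSchedule taint and a random label to it, and then relaunches a pod that selects that node by label but carries no toleration; scheduling must fail until the taint is removed. Below is an illustrative sketch, under assumed key/value names, of adding such a taint with client-go and of the toleration the pod would need to be allowed onto the node. It is not the framework's own helper code.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()

	// Apply a NoSchedule taint to node2 (key and value are placeholders,
	// standing in for the random ones the test generates).
	node, err := client.CoreV1().Nodes().Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
		Key:    "example.com/e2e-taint-key",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	})
	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// A pod pinned to node2 by label is only schedulable there if its spec
	// carries a matching toleration; leaving the toleration out reproduces
	// the FailedScheduling events above.
	toleration := corev1.Toleration{
		Key:      "example.com/e2e-taint-key",
		Operator: corev1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   corev1.TaintEffectNoSchedule,
	}
	_ = toleration
}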
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.192 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":2,"skipped":777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:07:21.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:07:21.980: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:07:21.987: INFO: Waiting for terminating namespaces to be deleted... 
May 22 01:07:21.989: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:07:21.996: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:07:21.997: INFO: Container nodereport ready: true, restart count 0 May 22 01:07:21.997: INFO: Container reconcile ready: true, restart count 0 May 22 01:07:21.997: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:07:21.997: INFO: Container discover ready: false, restart count 0 May 22 01:07:21.997: INFO: Container init ready: false, restart count 0 May 22 01:07:21.997: INFO: Container install ready: false, restart count 0 May 22 01:07:21.997: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:07:21.997: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:07:21.997: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container kube-multus ready: true, restart count 1 May 22 01:07:21.997: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:07:21.997: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:07:21.997: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:07:21.997: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:07:21.997: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:07:21.997: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:07:21.997: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:07:21.997: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:07:21.997: INFO: Container collectd ready: true, restart count 0 May 22 01:07:21.997: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:07:21.997: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:07:21.997: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container statuses recorded) May 22 01:07:21.997: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:21.997: INFO: Container node-exporter ready: true, restart count 0 May 22 01:07:21.997: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 
+0000 UTC (5 container statuses recorded) May 22 01:07:21.997: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:07:21.997: INFO: Container grafana ready: true, restart count 0 May 22 01:07:21.997: INFO: Container prometheus ready: true, restart count 1 May 22 01:07:21.997: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:07:21.997: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:07:21.997: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:07:21.997: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:21.997: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:07:21.997: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:07:21.997: INFO: Container tas-controller ready: true, restart count 0 May 22 01:07:21.997: INFO: Container tas-extender ready: true, restart count 0 May 22 01:07:21.997: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:07:22.013: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:07:22.013: INFO: Container nodereport ready: true, restart count 0 May 22 01:07:22.013: INFO: Container reconcile ready: true, restart count 0 May 22 01:07:22.013: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:07:22.013: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container kube-multus ready: true, restart count 1 May 22 01:07:22.013: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:07:22.013: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:07:22.013: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:07:22.013: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:07:22.013: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:07:22.013: INFO: Container collectd ready: true, restart count 0 May 22 01:07:22.013: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:07:22.013: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:07:22.013: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 +0000 UTC (2 container statuses recorded) May 22 01:07:22.013: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:07:22.013: INFO: Container node-exporter ready: true, restart count 0 May 22 01:07:22.013: INFO: still-no-tolerations from sched-pred-4377 
started at 2021-05-22 01:07:21 +0000 UTC (1 container statuses recorded) May 22 01:07:22.013: INFO: Container still-no-tolerations ready: false, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 22 01:07:22.061: INFO: Pod cmk-h8jxp requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod cmk-webhook-6c9d5f8578-8pz6w requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod cmk-xtrv9 requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod kube-flannel-5p7gq requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod kube-flannel-k6mr4 requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod kube-multus-ds-amd64-6q46t requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod kube-multus-ds-amd64-wlmhr requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod kube-proxy-h5k9s requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod kube-proxy-q57hf requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod kubernetes-dashboard-86c6f9df5b-8rsws requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod kubernetes-metrics-scraper-678c97765c-nnrtl requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod node-feature-discovery-worker-lh5hz requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod node-feature-discovery-worker-z827f requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod collectd-mc5kl requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod collectd-rkmjk requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod node-exporter-jctsz requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Pod node-exporter-l5k2r requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod prometheus-operator-5bb8cb9d8f-mzlrf requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k requesting local ephemeral resource =0 on Node node1 May 22 01:07:22.061: INFO: Pod still-no-tolerations requesting local ephemeral resource =0 on Node node2 May 22 01:07:22.061: INFO: Using pod capacity: 40542413347 May 22 01:07:22.061: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 May 22 01:07:22.061: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 22 01:07:22.254: 
INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16813e1e9f1f239d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16813e203e5af154], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.44/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16813e207b50faee], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16813e20b761d207], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.007727498s] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16813e20be4ad13a], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16813e20c50ab448], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16813e1e9fb55807], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16813e20b31497ed], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.49/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16813e20b3d965c3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16813e20d2a424fa], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 516.593807ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16813e20d9274ca6], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16813e20df9b682d], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16813e1ea4b26e1f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-10 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16813e20a9c3fd48], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.71/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16813e20aac8648a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16813e20c819882c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 491.848971ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16813e20e2fc6004], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16813e20e9b0b4ae], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16813e1ea5482e37], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-11 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16813e1f5f68c992], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.68/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16813e1f604c4afb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16813e1f80fdb6e2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 548.48874ms] STEP: Considering event: Type = 
[Normal], Name = [overcommit-11.16813e20d5d86b39], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16813e20e18c7ed9], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16813e1ea5daebfe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16813e20abbb4a83], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.74/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16813e20d4679522], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16813e21223c7a73], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.305791225s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16813e2129a267cd], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16813e212fd1b786], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16813e1ea658f3b3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16813e20d9aa2237], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.76/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16813e20da9b55bc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16813e2175fc42fc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.606813217s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16813e217d2eb941], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16813e21842c356a], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16813e1ea6ea933c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16813e20da4db341], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.77/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16813e20dae94d7c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16813e21913c468e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.058882936s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16813e2198f19d73], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16813e219f6fd677], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16813e1ea7812a42], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16813e20ab775005], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.72/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16813e20d44cb438], Reason = [Pulling], Message = 
[Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16813e2104c3ae63], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 813.096697ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16813e210c44259d], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16813e21180e49c8], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16813e1ea816866a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16813e2001a5d1a0], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.69/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16813e20504d764a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16813e208496548f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 877.181463ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16813e20e057d47f], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16813e20e82c4824], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16813e1ea8a94dc9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16813e20d1ee314d], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.75/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16813e20d6f2b639], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16813e213e7a653f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.736933127s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16813e214915326f], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16813e214f03e82a], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16813e1ea9381b3e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16813e20d3fe8646], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.73/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16813e20d8b541d8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16813e215c26e44e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.205255032s] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16813e21642cccc9], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16813e216a585327], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16813e1ea9d0f148], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-19 to node1] STEP: 
Considering event: Type = [Normal], Name = [overcommit-19.16813e20a9b2d289], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.70/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16813e20ab4ce5a8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16813e20e82a92eb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.021138369s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16813e20efa4e9a8], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16813e20f5855ee7], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16813e1ea0458748], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16813e20b6f8286d], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.47/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16813e20b94bb005], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16813e21459c3b1a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.354078445s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16813e214c1ade89], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16813e2151e62bbe], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16813e1ea0d1f898], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16813e20b738b295], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.50/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16813e20b876caf6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16813e210cbd8034], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.4139114s] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16813e21140879a8], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16813e21271aeb3c], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16813e1ea166f9d4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16813e1fa7039d7c], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.41/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16813e1fa7d33f90], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16813e1fcb3fa29e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 594.297478ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16813e1ff39c4f6e], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16813e206f69797e], Reason = [Started], 
Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16813e1ea1e7f3e7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16813e20b6d8a3b4], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.48/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16813e20b980a7e9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16813e216385ce80], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.85245745s] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16813e216a156b82], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16813e2170158bdb], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16813e1ea280e816], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16813e203e34057d], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.43/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16813e207b50da52], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16813e209ae2613f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 529.62226ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16813e20b55d9075], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16813e20bf61a3b4], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16813e1ea30300cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16813e1fa7104d16], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.42/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16813e1fa7d7f453], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16813e1feba2d710], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.137364895s] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16813e2001521fe2], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16813e2089b5de09], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16813e1ea3a12487], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16813e20b743c75c], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.46/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16813e20b86c239c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16813e20ef94c5a4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 925.402358ms] STEP: Considering event: Type = 
[Normal], Name = [overcommit-8.16813e20f6c35e1a], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16813e21028f7b7e], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16813e1ea42049df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8793/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16813e20b750b4d3], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.45/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16813e20b8883c4b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16813e2129f4c572], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.902930489s] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16813e213175073a], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16813e21377f4849], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16813e222c531d2b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [additional-pod.16813e222ca760b5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:07:38.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8793" for this suite. 
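Note: the overcommit-N pods above are plain pause pods whose ephemeral-storage requests are sized to consume the workers' allocatable local storage, which is why the final additional-pod fails scheduling with "Insufficient ephemeral-storage" on both worker nodes. A minimal sketch of such a pod in Go follows; the pod name and the 10Gi figure are illustrative assumptions, not values taken from this run (the e2e test derives the size from node allocatable).

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pause pod that reserves a slice of the node's allocatable ephemeral storage.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "overcommit-example"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.2",
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            // Illustrative size only.
                            corev1.ResourceEphemeralStorage: resource.MustParse("10Gi"),
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceEphemeralStorage: resource.MustParse("10Gi"),
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name, pod.Spec.Containers[0].Resources.Requests)
    }
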
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.387 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":3,"skipped":847,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:07:38.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 22 01:07:38.368: INFO: Waiting up to 1m0s for all nodes to be ready May 22 01:08:38.425: INFO: Waiting for terminating namespaces to be deleted... May 22 01:08:38.427: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 22 01:08:38.446: INFO: The status of Pod cmk-init-discover-node1-48g7j is Succeeded, skipping waiting May 22 01:08:38.446: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 22 01:08:38.446: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
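Note: the scoring spec below labels the two chosen nodes with a dedicated topology key (kubernetes.io/e2e-pts-score), runs a ReplicaSet on one of them, and then checks that a test pod carrying a soft topology spread constraint prefers the other node. A rough sketch of such a pod is shown here; the pod name and the foo=bar label selector are illustrative assumptions, not values read from this log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "test-pod",
                Labels: map[string]string{"foo": "bar"}, // illustrative label
            },
            Spec: corev1.PodSpec{
                TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
                    MaxSkew:     1,
                    TopologyKey: "kubernetes.io/e2e-pts-score",
                    // ScheduleAnyway makes the constraint a scoring preference, not a hard filter.
                    WhenUnsatisfiable: corev1.ScheduleAnyway,
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"foo": "bar"},
                    },
                }},
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
            },
        }
        fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
    }
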
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 May 22 01:08:46.521: INFO: ComputeCPUMemFraction for node: node2 May 22 01:08:46.535: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:08:46.535: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:08:46.535: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:08:46.535: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:08:46.535: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:08:46.535: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:08:46.535: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:08:46.535: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:08:46.535: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:08:46.535: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 22 01:08:46.535: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405 May 22 01:08:46.535: INFO: ComputeCPUMemFraction for node: node1 May 22 01:08:46.551: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:08:46.551: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:08:46.551: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:08:46.551: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:08:46.551: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:08:46.551: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:08:46.551: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:08:46.551: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:08:46.551: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:08:46.551: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:08:46.551: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:08:46.551: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:08:46.551: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:08:46.551: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:08:46.551: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:08:46.551: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:08:46.551: INFO: Node: node1, totalRequestedCPUResource: 1237, cpuAllocatableMil: 77000, cpuFraction: 0.016064935064935063 May 22 01:08:46.551: INFO: Node: node1, totalRequestedMemResource: 2089379840, memAllocatableVal: 178884628480, memFraction: 0.011680041252027425 May 22 01:08:46.562: INFO: Waiting for running... May 22 01:08:51.627: INFO: Waiting for running... 
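Note: the ComputeCPUMemFraction figures above are simply the sum of pod requests on a node divided by that node's allocatable; for node2, 487 requested millicores against 77000 allocatable gives the 0.0063 CPU fraction shown, and 504944640 bytes against 178884632576 gives the 0.0028 memory fraction. A small sketch of the same arithmetic, using the node2 values from the lines above:

    package main

    import "fmt"

    func main() {
        // Values taken from the node2 log lines above.
        requestedMilliCPU := int64(487)
        allocatableMilliCPU := int64(77000)
        requestedMemBytes := int64(504944640)
        allocatableMemBytes := int64(178884632576)

        cpuFraction := float64(requestedMilliCPU) / float64(allocatableMilliCPU)
        memFraction := float64(requestedMemBytes) / float64(allocatableMemBytes)
        fmt.Printf("cpuFraction=%v memFraction=%v\n", cpuFraction, memFraction)
    }
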
STEP: Compute Cpu, Mem Fraction after create balanced pods. May 22 01:08:56.679: INFO: ComputeCPUMemFraction for node: node2 May 22 01:08:56.696: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:08:56.696: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:08:56.696: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:08:56.696: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:08:56.696: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:08:56.696: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:08:56.696: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:08:56.696: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:08:56.696: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:08:56.696: INFO: Pod for on the node: c444491e-2327-4dba-9016-4b3c45af1f6b-0, Cpu: 38013, Mem: 88937371648 May 22 01:08:56.696: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 22 01:08:56.696: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 22 01:08:56.696: INFO: ComputeCPUMemFraction for node: node1 May 22 01:08:56.710: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:08:56.710: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:08:56.710: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:08:56.710: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:08:56.710: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:08:56.710: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:08:56.710: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:08:56.710: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:08:56.710: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:08:56.710: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:08:56.710: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:08:56.710: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:08:56.710: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:08:56.710: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:08:56.710: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:08:56.710: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:08:56.710: INFO: Pod for on the node: 60d6d862-8c1e-4a52-b4f4-8467327917ed-0, Cpu: 37263, Mem: 87352934400 May 22 01:08:56.710: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 22 01:08:56.710: INFO: Node: node1, totalRequestedMemResource: 89442314240, memAllocatableVal: 178884628480, memFraction: 0.5 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: 
Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:09:18.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5377" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:100.436 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":4,"skipped":869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:09:18.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 May 22 01:09:18.814: INFO: Waiting up to 1m0s for all nodes to be ready May 22 01:10:18.863: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
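Note: the preemption case that follows publishes a small amount of a fake extended resource on each node, fills most of it with one high-priority and three low-priority pods, and then creates a medium-priority pod with a topology spread constraint, so the scheduler must evict low-priority pods rather than the high-priority one. A hedged sketch of a pod that combines a priority class with such an extended-resource request; the class name, resource name, and quantities here are illustrative assumptions, not the values the e2e test uses.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        fakeResource := corev1.ResourceName("example.com/fake-resource") // hypothetical resource name
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "medium"},
            Spec: corev1.PodSpec{
                PriorityClassName: "medium-priority", // assumes such a PriorityClass exists
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.2",
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{fakeResource: resource.MustParse("3")},
                        Limits:   corev1.ResourceList{fakeResource: resource.MustParse("3")},
                    },
                }},
            },
        }
        fmt.Println(pod.Name, pod.Spec.PriorityClassName)
    }
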
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:11:03.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3579" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:104.359 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":5,"skipped":1030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:11:03.157: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 22 01:11:03.179: INFO: Waiting up to 1m0s for all nodes to be ready May 22 01:12:03.231: INFO: Waiting for terminating namespaces to be deleted... May 22 01:12:03.233: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 22 01:12:03.253: INFO: The status of Pod cmk-init-discover-node1-48g7j is Succeeded, skipping waiting May 22 01:12:03.253: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 22 01:12:03.253: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname May 22 01:12:07.280: INFO: ComputeCPUMemFraction for node: node1 May 22 01:12:07.296: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:12:07.296: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:12:07.296: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:12:07.296: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:12:07.296: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:12:07.296: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:12:07.296: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:12:07.296: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:12:07.296: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:12:07.296: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:12:07.296: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:12:07.296: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:12:07.296: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:12:07.296: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:12:07.296: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:12:07.297: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:12:07.297: INFO: Node: node1, totalRequestedCPUResource: 1237, cpuAllocatableMil: 77000, cpuFraction: 0.016064935064935063 May 22 01:12:07.297: INFO: Node: node1, totalRequestedMemResource: 2089379840, memAllocatableVal: 178884628480, memFraction: 0.011680041252027425 May 22 01:12:07.297: INFO: ComputeCPUMemFraction for node: node2 May 22 01:12:07.313: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:12:07.313: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:12:07.313: INFO: Pod for on the node: 
kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:12:07.313: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:12:07.313: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:12:07.313: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:12:07.313: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:12:07.313: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:12:07.313: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:12:07.313: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 22 01:12:07.313: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 22 01:12:07.313: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405 May 22 01:12:07.325: INFO: Waiting for running... May 22 01:12:12.387: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 22 01:12:17.437: INFO: ComputeCPUMemFraction for node: node1 May 22 01:12:17.452: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:12:17.452: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:12:17.452: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:12:17.452: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:12:17.452: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:12:17.452: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:12:17.452: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:12:17.452: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:12:17.453: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:12:17.453: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:12:17.453: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:12:17.453: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:12:17.453: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:12:17.453: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:12:17.453: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:12:17.453: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:12:17.453: INFO: Pod for on the node: a0c6da2f-4ebb-4b02-9fcb-c6d561273a27-0, Cpu: 44962, Mem: 105241397248 May 22 01:12:17.453: INFO: Node: node1, totalRequestedCPUResource: 46199, cpuAllocatableMil: 77000, cpuFraction: 0.599987012987013 May 22 01:12:17.453: INFO: Node: node1, totalRequestedMemResource: 107330777088, memAllocatableVal: 178884628480, memFraction: 0.6 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 22 01:12:17.453: INFO: ComputeCPUMemFraction for node: node2 May 22 01:12:17.469: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:12:17.469: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:12:17.469: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:12:17.469: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:12:17.469: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:12:17.469: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:12:17.469: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:12:17.469: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:12:17.469: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:12:17.469: INFO: Pod for on the node: e6585a4d-f145-4c32-84f3-4e842174016c-0, Cpu: 45713, Mem: 106825834905 May 22 01:12:17.469: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 22 01:12:17.469: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 22 01:12:17.469: INFO: Node: node2, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:12:27.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4414" for this suite. 
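Note: the pod-with-pod-antiaffinity launched above carries an anti-affinity term against the label of the earlier pod-with-label-security-s1, so on this two-node cluster it is expected to land on node1. A sketch of that shape follows; the security=S1 label key/value is an assumption for illustration, and whether the real test uses the required or the preferred flavour is not visible in this log (the sketch uses the required form).

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
            Spec: corev1.PodSpec{
                Affinity: &corev1.Affinity{
                    PodAntiAffinity: &corev1.PodAntiAffinity{
                        RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                            LabelSelector: &metav1.LabelSelector{
                                MatchExpressions: []metav1.LabelSelectorRequirement{{
                                    Key:      "security", // assumed label on pod-with-label-security-s1
                                    Operator: metav1.LabelSelectorOpIn,
                                    Values:   []string{"S1"},
                                }},
                            },
                            TopologyKey: "kubernetes.io/hostname",
                        }},
                    },
                },
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
            },
        }
        fmt.Println(pod.Name)
    }
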
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:84.353 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":6,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:12:27.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:12:27.541: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:12:27.550: INFO: Waiting for terminating namespaces to be deleted... 
May 22 01:12:27.553: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:12:27.570: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:12:27.571: INFO: Container nodereport ready: true, restart count 0 May 22 01:12:27.571: INFO: Container reconcile ready: true, restart count 0 May 22 01:12:27.571: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:12:27.571: INFO: Container discover ready: false, restart count 0 May 22 01:12:27.571: INFO: Container init ready: false, restart count 0 May 22 01:12:27.571: INFO: Container install ready: false, restart count 0 May 22 01:12:27.571: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:12:27.571: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:12:27.571: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container kube-multus ready: true, restart count 1 May 22 01:12:27.571: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:12:27.571: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:12:27.571: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:12:27.571: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:12:27.571: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:12:27.571: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:12:27.571: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:12:27.571: INFO: Container collectd ready: true, restart count 0 May 22 01:12:27.571: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:12:27.571: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:12:27.571: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container statuses recorded) May 22 01:12:27.571: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:12:27.571: INFO: Container node-exporter ready: true, restart count 0 May 22 01:12:27.571: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 
+0000 UTC (5 container statuses recorded) May 22 01:12:27.571: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:12:27.571: INFO: Container grafana ready: true, restart count 0 May 22 01:12:27.571: INFO: Container prometheus ready: true, restart count 1 May 22 01:12:27.571: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:12:27.571: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:12:27.571: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:12:27.571: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:12:27.571: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:12:27.571: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:12:27.571: INFO: Container tas-controller ready: true, restart count 0 May 22 01:12:27.571: INFO: Container tas-extender ready: true, restart count 0 May 22 01:12:27.571: INFO: pod-with-pod-antiaffinity from sched-priority-4414 started at 2021-05-22 01:12:18 +0000 UTC (1 container statuses recorded) May 22 01:12:27.571: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 May 22 01:12:27.571: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:12:27.585: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:12:27.585: INFO: Container nodereport ready: true, restart count 0 May 22 01:12:27.585: INFO: Container reconcile ready: true, restart count 0 May 22 01:12:27.585: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:12:27.585: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container kube-multus ready: true, restart count 1 May 22 01:12:27.585: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:12:27.585: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:12:27.585: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:12:27.585: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:12:27.585: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:12:27.585: INFO: Container collectd ready: true, restart count 0 May 22 01:12:27.585: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:12:27.585: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:12:27.585: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 +0000 UTC (2 container 
statuses recorded) May 22 01:12:27.585: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:12:27.585: INFO: Container node-exporter ready: true, restart count 0 May 22 01:12:27.585: INFO: pod-with-label-security-s1 from sched-priority-4414 started at 2021-05-22 01:12:03 +0000 UTC (1 container statuses recorded) May 22 01:12:27.585: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:12:41.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6238" for this suite. 
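Note: the PodTopologySpread Filtering spec above is the hard counterpart of the earlier scoring case: each of the 4 pods carries a MaxSkew=1 constraint with WhenUnsatisfiable=DoNotSchedule on the dedicated kubernetes.io/e2e-pts-filter key, which forces a 2/2 split across the two labelled nodes. A sketch of one such pod; the pod name and the filter=on selector are illustrative assumptions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "spread-pod-1",
                Labels: map[string]string{"filter": "on"}, // illustrative label
            },
            Spec: corev1.PodSpec{
                TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
                    MaxSkew:     1,
                    TopologyKey: "kubernetes.io/e2e-pts-filter",
                    // DoNotSchedule makes the constraint a hard filter during scheduling.
                    WhenUnsatisfiable: corev1.DoNotSchedule,
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"filter": "on"},
                    },
                }},
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
            },
        }
        fmt.Println(pod.Spec.TopologySpreadConstraints[0].WhenUnsatisfiable)
    }
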
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":7,"skipped":2628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:12:41.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 22 01:12:41.720: INFO: Waiting up to 1m0s for all nodes to be ready May 22 01:13:41.768: INFO: Waiting for terminating namespaces to be deleted... May 22 01:13:41.770: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 22 01:13:41.786: INFO: The status of Pod cmk-init-discover-node1-48g7j is Succeeded, skipping waiting May 22 01:13:41.786: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 22 01:13:41.786: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
[It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 May 22 01:13:41.786: INFO: ComputeCPUMemFraction for node: node1 May 22 01:13:41.801: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:13:41.801: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:13:41.801: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:13:41.801: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:13:41.802: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:13:41.802: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:13:41.802: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:13:41.802: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:13:41.802: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:13:41.802: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:13:41.802: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:13:41.802: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:13:41.802: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:13:41.802: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:13:41.802: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:13:41.802: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:13:41.802: INFO: Node: node1, totalRequestedCPUResource: 1237, cpuAllocatableMil: 77000, cpuFraction: 0.016064935064935063 May 22 01:13:41.802: INFO: Node: node1, totalRequestedMemResource: 2089379840, memAllocatableVal: 178884628480, memFraction: 0.011680041252027425 May 22 01:13:41.802: INFO: ComputeCPUMemFraction for node: node2 May 22 01:13:41.818: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:13:41.818: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:13:41.818: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:13:41.818: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:13:41.818: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:13:41.818: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:13:41.818: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:13:41.818: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:13:41.818: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:13:41.818: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 22 01:13:41.818: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405 May 22 01:13:41.831: INFO: Waiting for running... May 22 01:13:46.894: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
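Note: the "balanced" filler pods (the UUID-named pods in the figures below) are sized so that every node reaches the same target utilisation, here 50% of allocatable CPU and memory. For example node2 already has 487 millicores requested, so its filler requests 38500 - 487 = 38013 millicores, and node1's requests 38500 - 1237 = 37263, matching the post-balance numbers below. In sketch form, using values from this log:

    package main

    import "fmt"

    func main() {
        // Target the same CPU fraction (0.5 here) on every node, as in the log below.
        allocatableMilliCPU := int64(77000)
        targetFraction := 0.5

        for node, requested := range map[string]int64{"node1": 1237, "node2": 487} {
            filler := int64(targetFraction*float64(allocatableMilliCPU)) - requested
            fmt.Printf("%s: filler pod requests %d millicores\n", node, filler)
        }
    }
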
May 22 01:13:51.945: INFO: ComputeCPUMemFraction for node: node1 May 22 01:13:51.962: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:13:51.962: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:13:51.962: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:13:51.962: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:13:51.962: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:13:51.962: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:13:51.962: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:13:51.962: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:13:51.962: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:13:51.962: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:13:51.962: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:13:51.963: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:13:51.963: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:13:51.963: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:13:51.963: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:13:51.963: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:13:51.963: INFO: Pod for on the node: bb39f19a-9e38-474c-8e7d-362f06f7032e-0, Cpu: 37263, Mem: 87352934400 May 22 01:13:51.963: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 22 01:13:51.963: INFO: Node: node1, totalRequestedMemResource: 89442314240, memAllocatableVal: 178884628480, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 22 01:13:51.963: INFO: ComputeCPUMemFraction for node: node2 May 22 01:13:51.977: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:13:51.977: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:13:51.977: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:13:51.977: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:13:51.977: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:13:51.977: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:13:51.977: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:13:51.977: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:13:51.977: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:13:51.977: INFO: Pod for on the node: fffc7094-4275-4001-8eb5-19fd57e4b70b-0, Cpu: 38013, Mem: 88937371648 May 22 01:13:51.977: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 22 01:13:51.977: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Trying to apply 10 (tolerable) taints on the first node. 
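Note: the taints applied below all use the soft PreferNoSchedule effect, so they bias scoring rather than block placement outright; the pod created later tolerates only the ten taints on the first node and is therefore expected to land there. A sketch of one such taint and its matching toleration; the key and value here are placeholders standing in for the random kubernetes.io/e2e-taint-key-... pairs shown below.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Soft taint on the node; pods without a matching toleration are only de-prioritised.
        taint := corev1.Taint{
            Key:    "kubernetes.io/e2e-taint-key-example", // placeholder key
            Value:  "testing-taint-value-example",         // placeholder value
            Effect: corev1.TaintEffectPreferNoSchedule,
        }

        // Matching toleration on the pod that should prefer the tainted node.
        toleration := corev1.Toleration{
            Key:      taint.Key,
            Operator: corev1.TolerationOpEqual,
            Value:    taint.Value,
            Effect:   corev1.TaintEffectPreferNoSchedule,
        }
        fmt.Println(taint.Key, toleration.Operator)
    }
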
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9d05fde5-b468-4de3-b871-c265d2ef4657=testing-taint-value-cfd7ea40-3741-46a9-bf71-f2b53fd4390d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-74f71a89-6d70-4924-ae0a-1862f1787d1d=testing-taint-value-f35eb22d-0a34-41f7-9932-54c61af5aaf0:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-85507015-ff2f-4424-b3bd-7076a1189cdb=testing-taint-value-d7758ccb-7a90-4d0f-aa8a-346163f87cc9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8d7025b8-f7de-40df-8a99-2aaa68da967b=testing-taint-value-6ba9f00d-62ec-422e-b1a3-46253d5f337a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-46e0fad8-af3c-4d4c-87e5-460de8affdd5=testing-taint-value-6de071a3-470e-4dec-81bd-0b2472d878d5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7173bc69-2265-448e-b4cc-3b5e445e959c=testing-taint-value-0cd2be66-0559-41d4-b5a4-88acfa431176:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-88b2a263-e490-41e6-9aa5-7267670e16ec=testing-taint-value-a17b3da1-a4e6-4896-bdea-4ab1f7a95b7d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-99fe5cf4-408e-4179-94d4-a85dfe6c74f6=testing-taint-value-f5852bcb-9a5d-49f5-9b61-4c9a76a5b1b7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d5e96348-5597-43dc-a3ff-980658e5c675=testing-taint-value-9442617c-6002-42d4-9d4a-55ff363fe3fd:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-786ecb45-ecb5-4937-92b1-4d89f0bde1cf=testing-taint-value-7a3c8933-7cb4-40d3-a3f4-ad4bcef7ce7a:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-bdc16cbe-dd4a-471d-9144-9b9044557a1e=testing-taint-value-dc6d0a5d-d9db-4b62-b140-c5832d3f935f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-93c1bf44-14ed-4293-8ce3-3261b691ab28=testing-taint-value-13f78280-7e05-4724-9e8b-a3282043649c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2dcae2c3-ef57-4a1a-94d2-c075d40b75d8=testing-taint-value-279b98b5-218b-454a-aa65-479569c35189:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6df5b44c-dbf4-4142-b932-2ae778eb2d93=testing-taint-value-b04bfbc9-d191-4c68-9cf2-a0dbba3cc0d1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-590b942f-1a46-485b-98e9-86cddd16823e=testing-taint-value-4d81748f-c086-4f07-98f7-97554ba076bf:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3267cede-fa29-462a-9910-067099462540=testing-taint-value-4ca88e2f-8d85-41a8-86b0-c5f4ade49ced:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-cff309cd-7b8c-47f5-9a59-7b95ccfd69fb=testing-taint-value-93c30bcc-e9f2-4f36-abe1-1b94fa0720c4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7622d59a-13bf-4c62-b463-e6661a116f6a=testing-taint-value-f95669b9-d18f-4f36-9c4d-c493d27ad5ef:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d99e3921-18f8-4ab0-9ea8-a54f97d2c8c3=testing-taint-value-c3c5ae86-4e8b-4cc1-938a-efff8b539757:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-31c97bfe-ead2-426e-8a35-9250f352c97d=testing-taint-value-3a2fcb8e-c159-4db0-927b-bbcd0fc711c1:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-31c97bfe-ead2-426e-8a35-9250f352c97d=testing-taint-value-3a2fcb8e-c159-4db0-927b-bbcd0fc711c1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d99e3921-18f8-4ab0-9ea8-a54f97d2c8c3=testing-taint-value-c3c5ae86-4e8b-4cc1-938a-efff8b539757:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7622d59a-13bf-4c62-b463-e6661a116f6a=testing-taint-value-f95669b9-d18f-4f36-9c4d-c493d27ad5ef:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-cff309cd-7b8c-47f5-9a59-7b95ccfd69fb=testing-taint-value-93c30bcc-e9f2-4f36-abe1-1b94fa0720c4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3267cede-fa29-462a-9910-067099462540=testing-taint-value-4ca88e2f-8d85-41a8-86b0-c5f4ade49ced:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-590b942f-1a46-485b-98e9-86cddd16823e=testing-taint-value-4d81748f-c086-4f07-98f7-97554ba076bf:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6df5b44c-dbf4-4142-b932-2ae778eb2d93=testing-taint-value-b04bfbc9-d191-4c68-9cf2-a0dbba3cc0d1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2dcae2c3-ef57-4a1a-94d2-c075d40b75d8=testing-taint-value-279b98b5-218b-454a-aa65-479569c35189:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-93c1bf44-14ed-4293-8ce3-3261b691ab28=testing-taint-value-13f78280-7e05-4724-9e8b-a3282043649c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-bdc16cbe-dd4a-471d-9144-9b9044557a1e=testing-taint-value-dc6d0a5d-d9db-4b62-b140-c5832d3f935f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-786ecb45-ecb5-4937-92b1-4d89f0bde1cf=testing-taint-value-7a3c8933-7cb4-40d3-a3f4-ad4bcef7ce7a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d5e96348-5597-43dc-a3ff-980658e5c675=testing-taint-value-9442617c-6002-42d4-9d4a-55ff363fe3fd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-99fe5cf4-408e-4179-94d4-a85dfe6c74f6=testing-taint-value-f5852bcb-9a5d-49f5-9b61-4c9a76a5b1b7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-88b2a263-e490-41e6-9aa5-7267670e16ec=testing-taint-value-a17b3da1-a4e6-4896-bdea-4ab1f7a95b7d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7173bc69-2265-448e-b4cc-3b5e445e959c=testing-taint-value-0cd2be66-0559-41d4-b5a4-88acfa431176:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-46e0fad8-af3c-4d4c-87e5-460de8affdd5=testing-taint-value-6de071a3-470e-4dec-81bd-0b2472d878d5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8d7025b8-f7de-40df-8a99-2aaa68da967b=testing-taint-value-6ba9f00d-62ec-422e-b1a3-46253d5f337a:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-85507015-ff2f-4424-b3bd-7076a1189cdb=testing-taint-value-d7758ccb-7a90-4d0f-aa8a-346163f87cc9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-74f71a89-6d70-4924-ae0a-1862f1787d1d=testing-taint-value-f35eb22d-0a34-41f7-9932-54c61af5aaf0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9d05fde5-b468-4de3-b871-c265d2ef4657=testing-taint-value-cfd7ea40-3741-46a9-bf71-f2b53fd4390d:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:14:09.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9243" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:87.668 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":8,"skipped":2661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:14:09.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:14:09.394: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:14:09.401: INFO: Waiting for terminating namespaces to be deleted... 
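Note: the spec that has just started ("validates that required NodeAffinity setting is respected if matching") labels a chosen node and then schedules a pod whose required node affinity matches that label. A hedged sketch of that shape; the label key and value here are invented for illustration and are not taken from this run.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-node-affinity"},
            Spec: corev1.PodSpec{
                Affinity: &corev1.Affinity{
                    NodeAffinity: &corev1.NodeAffinity{
                        RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                            NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                                MatchExpressions: []corev1.NodeSelectorRequirement{{
                                    Key:      "kubernetes.io/e2e-node-label", // hypothetical label key
                                    Operator: corev1.NodeSelectorOpIn,
                                    Values:   []string{"selected"},
                                }},
                            }},
                        },
                    },
                },
                Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
            },
        }
        fmt.Println(pod.Name)
    }
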
May 22 01:14:09.404: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:14:09.411: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:14:09.411: INFO: Container nodereport ready: true, restart count 0 May 22 01:14:09.411: INFO: Container reconcile ready: true, restart count 0 May 22 01:14:09.411: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:14:09.411: INFO: Container discover ready: false, restart count 0 May 22 01:14:09.411: INFO: Container init ready: false, restart count 0 May 22 01:14:09.411: INFO: Container install ready: false, restart count 0 May 22 01:14:09.411: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:14:09.411: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:14:09.411: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container kube-multus ready: true, restart count 1 May 22 01:14:09.411: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:14:09.411: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:14:09.411: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:14:09.411: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:14:09.411: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:14:09.411: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:14:09.411: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:14:09.411: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:14:09.411: INFO: Container collectd ready: true, restart count 0 May 22 01:14:09.411: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:14:09.411: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:14:09.411: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container statuses recorded) May 22 01:14:09.411: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:14:09.411: INFO: Container node-exporter ready: true, restart count 0 May 22 01:14:09.411: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 
+0000 UTC (5 container statuses recorded) May 22 01:14:09.411: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:14:09.411: INFO: Container grafana ready: true, restart count 0 May 22 01:14:09.411: INFO: Container prometheus ready: true, restart count 1 May 22 01:14:09.411: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:14:09.411: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:14:09.411: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:14:09.411: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:14:09.411: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:14:09.411: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:14:09.411: INFO: Container tas-controller ready: true, restart count 0 May 22 01:14:09.412: INFO: Container tas-extender ready: true, restart count 0 May 22 01:14:09.412: INFO: with-tolerations from sched-priority-9243 started at 2021-05-22 01:13:52 +0000 UTC (1 container statuses recorded) May 22 01:14:09.412: INFO: Container with-tolerations ready: true, restart count 0 May 22 01:14:09.412: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:14:09.420: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:14:09.420: INFO: Container nodereport ready: true, restart count 0 May 22 01:14:09.420: INFO: Container reconcile ready: true, restart count 0 May 22 01:14:09.420: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:14:09.420: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:14:09.420: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:14:09.420: INFO: Container kube-multus ready: true, restart count 1 May 22 01:14:09.420: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:14:09.420: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:14:09.420: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:14:09.420: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:14:09.420: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:14:09.420: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:14:09.420: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:14:09.420: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:14:09.420: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:14:09.420: INFO: Container collectd ready: true, restart count 0 May 22 01:14:09.420: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:14:09.420: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:14:09.420: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 +0000 UTC (2 container statuses recorded) May 
22 01:14:09.420: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:14:09.420: INFO: Container node-exporter ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0fca3263-dc4f-4078-8da0-d17a1d8765c8 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-0fca3263-dc4f-4078-8da0-d17a1d8765c8 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0fca3263-dc4f-4078-8da0-d17a1d8765c8 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:14:17.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2787" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.127 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":9,"skipped":2983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:14:17.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:14:17.519: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:14:17.527: INFO: Waiting for terminating namespaces to be deleted... 
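The NodeAffinity spec above works by stamping one node with a random kubernetes.io/e2e-... label and relaunching the pod with a hard requirement on that label. The following is a sketch of such a required node-affinity term, again with a placeholder key and value (the real test uses the randomized label key and the value 42 seen in the log):

// Sketch only: a required node-affinity term of the kind this spec validates,
// pinning the pod to whichever node carries the test label.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	affinity := v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-example-label", // placeholder
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(affinity, "", "  ")
	fmt.Println(string(out))
}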
May 22 01:14:17.529: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:14:17.539: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:14:17.539: INFO: Container nodereport ready: true, restart count 0 May 22 01:14:17.539: INFO: Container reconcile ready: true, restart count 0 May 22 01:14:17.539: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:14:17.539: INFO: Container discover ready: false, restart count 0 May 22 01:14:17.539: INFO: Container init ready: false, restart count 0 May 22 01:14:17.539: INFO: Container install ready: false, restart count 0 May 22 01:14:17.539: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:14:17.539: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:14:17.539: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container kube-multus ready: true, restart count 1 May 22 01:14:17.539: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:14:17.539: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:14:17.539: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:14:17.539: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:14:17.539: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:14:17.539: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:14:17.539: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:14:17.539: INFO: Container collectd ready: true, restart count 0 May 22 01:14:17.539: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:14:17.539: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:14:17.539: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container statuses recorded) May 22 01:14:17.539: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:14:17.539: INFO: Container node-exporter ready: true, restart count 0 May 22 01:14:17.539: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 
+0000 UTC (5 container statuses recorded) May 22 01:14:17.539: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:14:17.539: INFO: Container grafana ready: true, restart count 0 May 22 01:14:17.539: INFO: Container prometheus ready: true, restart count 1 May 22 01:14:17.539: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:14:17.539: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:14:17.539: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:14:17.539: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:14:17.539: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:14:17.539: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:14:17.539: INFO: Container tas-controller ready: true, restart count 0 May 22 01:14:17.539: INFO: Container tas-extender ready: true, restart count 0 May 22 01:14:17.539: INFO: with-tolerations from sched-priority-9243 started at 2021-05-22 01:13:52 +0000 UTC (1 container statuses recorded) May 22 01:14:17.539: INFO: Container with-tolerations ready: false, restart count 0 May 22 01:14:17.539: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:14:17.547: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:14:17.548: INFO: Container nodereport ready: true, restart count 0 May 22 01:14:17.548: INFO: Container reconcile ready: true, restart count 0 May 22 01:14:17.548: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:14:17.548: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container kube-multus ready: true, restart count 1 May 22 01:14:17.548: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:14:17.548: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:14:17.548: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:14:17.548: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:14:17.548: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:14:17.548: INFO: Container collectd ready: true, restart count 0 May 22 01:14:17.548: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:14:17.548: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:14:17.548: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 +0000 UTC (2 container statuses recorded) May 
22 01:14:17.548: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:14:17.548: INFO: Container node-exporter ready: true, restart count 0 May 22 01:14:17.548: INFO: with-labels from sched-pred-2787 started at 2021-05-22 01:14:13 +0000 UTC (1 container statuses recorded) May 22 01:14:17.548: INFO: Container with-labels ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4534699e-2ebd-4e60-815c-e8d7870e252b=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-5c297aa7-1854-465f-8271-7cb866513570 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-5c297aa7-1854-465f-8271-7cb866513570 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-5c297aa7-1854-465f-8271-7cb866513570 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4534699e-2ebd-4e60-815c-e8d7870e252b=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:14:25.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1085" for this suite. 
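The matching taints-tolerations spec pairs a hard NoSchedule taint on the chosen node with an equal toleration on the relaunched pod (plus a node label so the pod can only land there). Here is a sketch of that pairing with placeholder key and value; ToleratesTaint is the helper in k8s.io/api that implements the same matching rule the scheduler applies:

// Sketch only: the NoSchedule taint/toleration pair behind
// "validates that taints-tolerations is respected if matching".
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // placeholder
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule, // hard: non-tolerating pods are filtered out
	}

	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}

	fmt.Println("pod tolerates taint:", toleration.ToleratesTaint(&taint)) // true
}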
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.151 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":10,"skipped":3035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:14:25.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 22 01:14:25.686: INFO: Waiting up to 1m0s for all nodes to be ready May 22 01:15:25.734: INFO: Waiting for terminating namespaces to be deleted... May 22 01:15:25.736: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 22 01:15:25.759: INFO: The status of Pod cmk-init-discover-node1-48g7j is Succeeded, skipping waiting May 22 01:15:25.759: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 22 01:15:25.759: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
[It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 May 22 01:15:25.759: INFO: ComputeCPUMemFraction for node: node1 May 22 01:15:25.773: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:15:25.773: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:15:25.773: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:15:25.773: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:15:25.773: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:15:25.773: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:15:25.773: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:15:25.773: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:15:25.773: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:15:25.773: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:15:25.773: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:15:25.773: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:15:25.773: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:15:25.773: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:15:25.773: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:15:25.773: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:15:25.773: INFO: Node: node1, totalRequestedCPUResource: 1237, cpuAllocatableMil: 77000, cpuFraction: 0.016064935064935063 May 22 01:15:25.773: INFO: Node: node1, totalRequestedMemResource: 2089379840, memAllocatableVal: 178884628480, memFraction: 0.011680041252027425 May 22 01:15:25.773: INFO: ComputeCPUMemFraction for node: node2 May 22 01:15:25.787: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:15:25.787: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:15:25.787: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:15:25.787: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:15:25.787: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:15:25.787: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:15:25.787: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:15:25.787: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:15:25.787: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:15:25.787: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 22 01:15:25.787: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405 May 22 01:15:25.802: INFO: Waiting for running... May 22 01:15:30.865: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
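The fraction lines above are simple ratios: cpuFraction is the sum of pod CPU requests on the node divided by its allocatable millicores, and memFraction is the same for memory. Before measuring the priority under test, the framework creates one filler pod per node sized so that every node reaches the same target utilisation (0.5 here), which is where the large requests logged below come from. A self-contained sketch using node1's numbers from this run:

// Sketch only: the arithmetic behind ComputeCPUMemFraction and the "balanced"
// filler pods, reproduced with node1's values from this log.
package main

import "fmt"

func main() {
	const (
		requestedCPUMilli   = 1237.0          // sum of pod CPU requests already on node1
		allocatableCPUMilli = 77000.0         // node1 allocatable CPU (millicores)
		requestedMemBytes   = 2089379840.0    // sum of pod memory requests on node1
		allocatableMemBytes = 178884628480.0  // node1 allocatable memory (bytes)
		targetFraction      = 0.5             // the test balances every node to 50% utilisation
	)

	fmt.Println("cpuFraction before:", requestedCPUMilli/allocatableCPUMilli) // ~0.01606
	fmt.Println("memFraction before:", requestedMemBytes/allocatableMemBytes) // ~0.01168

	// Filler pod request = target * allocatable - already requested.
	fillerCPU := targetFraction*allocatableCPUMilli - requestedCPUMilli
	fillerMem := targetFraction*allocatableMemBytes - requestedMemBytes
	fmt.Println("filler pod cpu (m):", fillerCPU)   // 37263, matching the f736a869-... pod below
	fmt.Println("filler pod mem (B):", fillerMem)   // 87352934400
}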
May 22 01:15:35.921: INFO: ComputeCPUMemFraction for node: node1 May 22 01:15:35.938: INFO: Pod for on the node: cmk-h8jxp, Cpu: 200, Mem: 419430400 May 22 01:15:35.938: INFO: Pod for on the node: cmk-init-discover-node1-48g7j, Cpu: 300, Mem: 629145600 May 22 01:15:35.938: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-8pz6w, Cpu: 100, Mem: 209715200 May 22 01:15:35.938: INFO: Pod for on the node: kube-flannel-k6mr4, Cpu: 150, Mem: 64000000 May 22 01:15:35.938: INFO: Pod for on the node: kube-multus-ds-amd64-wlmhr, Cpu: 100, Mem: 94371840 May 22 01:15:35.938: INFO: Pod for on the node: kube-proxy-h5k9s, Cpu: 100, Mem: 209715200 May 22 01:15:35.938: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-8rsws, Cpu: 50, Mem: 64000000 May 22 01:15:35.938: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-nnrtl, Cpu: 100, Mem: 209715200 May 22 01:15:35.938: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 22 01:15:35.938: INFO: Pod for on the node: node-feature-discovery-worker-lh5hz, Cpu: 100, Mem: 209715200 May 22 01:15:35.938: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm, Cpu: 100, Mem: 209715200 May 22 01:15:35.938: INFO: Pod for on the node: collectd-mc5kl, Cpu: 300, Mem: 629145600 May 22 01:15:35.938: INFO: Pod for on the node: node-exporter-l5k2r, Cpu: 112, Mem: 209715200 May 22 01:15:35.938: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 22 01:15:35.938: INFO: Pod for on the node: prometheus-operator-5bb8cb9d8f-mzlrf, Cpu: 200, Mem: 314572800 May 22 01:15:35.939: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k, Cpu: 200, Mem: 419430400 May 22 01:15:35.939: INFO: Pod for on the node: f736a869-feec-4cd8-afb6-7f0a024de861-0, Cpu: 37263, Mem: 87352934400 May 22 01:15:35.939: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 22 01:15:35.939: INFO: Node: node1, totalRequestedMemResource: 89442314240, memAllocatableVal: 178884628480, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 22 01:15:35.939: INFO: ComputeCPUMemFraction for node: node2 May 22 01:15:35.954: INFO: Pod for on the node: cmk-xtrv9, Cpu: 200, Mem: 419430400 May 22 01:15:35.954: INFO: Pod for on the node: kube-flannel-5p7gq, Cpu: 150, Mem: 64000000 May 22 01:15:35.954: INFO: Pod for on the node: kube-multus-ds-amd64-6q46t, Cpu: 100, Mem: 94371840 May 22 01:15:35.954: INFO: Pod for on the node: kube-proxy-q57hf, Cpu: 100, Mem: 209715200 May 22 01:15:35.954: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 22 01:15:35.954: INFO: Pod for on the node: node-feature-discovery-worker-z827f, Cpu: 100, Mem: 209715200 May 22 01:15:35.954: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k, Cpu: 100, Mem: 209715200 May 22 01:15:35.954: INFO: Pod for on the node: collectd-rkmjk, Cpu: 300, Mem: 629145600 May 22 01:15:35.954: INFO: Pod for on the node: node-exporter-jctsz, Cpu: 112, Mem: 209715200 May 22 01:15:35.954: INFO: Pod for on the node: da3b9fee-8d5a-411e-abe3-338b866edeaa-0, Cpu: 38013, Mem: 88937371648 May 22 01:15:35.954: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 22 01:15:35.954: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. 
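The avoidPod step stamps the scheduler.alpha.kubernetes.io/preferAvoidPods annotation onto node1 so that the scheduler's node-preference scoring steers pods owned by the named controller away from it; the ReplicationController scaled up below should therefore land on node2. The following sketch shows how such an annotation value can be built, assuming the upstream v1.AvoidPods types and annotation-key constant; the controller name and UID here are placeholders (the test uses the RC's actual UID):

// Sketch only: building the preferAvoidPods node annotation applied by this spec.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "placeholder-uid", // placeholder; the test uses the RC's real UID
					Controller: &controller,
				},
			},
			Reason:  "some reason",
			Message: "some message",
		}},
	}
	val, _ := json.Marshal(avoid)
	// The serialized value goes into this node annotation:
	fmt.Printf("%s=%s\n", v1.PreferAvoidPodsAnnotationKey, val)
}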
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1766 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1766, will wait for the garbage collector to delete the pods May 22 01:15:42.135: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.368372ms May 22 01:15:42.835: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.467148ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:15:59.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1766" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:93.996 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":11,"skipped":4096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 22 01:15:59.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 22 01:15:59.695: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 22 01:15:59.702: INFO: Waiting for terminating namespaces to be deleted... May 22 01:15:59.704: INFO: Logging pods the apiserver thinks is on node node1 before test May 22 01:15:59.714: INFO: cmk-h8jxp from kube-system started at 2021-05-21 20:07:00 +0000 UTC (2 container statuses recorded) May 22 01:15:59.714: INFO: Container nodereport ready: true, restart count 0 May 22 01:15:59.714: INFO: Container reconcile ready: true, restart count 0 May 22 01:15:59.714: INFO: cmk-init-discover-node1-48g7j from kube-system started at 2021-05-21 20:06:17 +0000 UTC (3 container statuses recorded) May 22 01:15:59.714: INFO: Container discover ready: false, restart count 0 May 22 01:15:59.714: INFO: Container init ready: false, restart count 0 May 22 01:15:59.714: INFO: Container install ready: false, restart count 0 May 22 01:15:59.714: INFO: cmk-webhook-6c9d5f8578-8pz6w from kube-system started at 2021-05-21 20:07:00 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container cmk-webhook ready: true, restart count 0 May 22 01:15:59.714: INFO: kube-flannel-k6mr4 from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container kube-flannel ready: true, restart count 1 May 22 01:15:59.714: INFO: kube-multus-ds-amd64-wlmhr from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container kube-multus ready: true, restart count 1 May 22 01:15:59.714: INFO: kube-proxy-h5k9s from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container kube-proxy ready: true, restart count 1 May 22 01:15:59.714: INFO: kubernetes-dashboard-86c6f9df5b-8rsws from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 22 01:15:59.714: INFO: kubernetes-metrics-scraper-678c97765c-nnrtl from kube-system started at 2021-05-21 19:58:07 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 22 01:15:59.714: INFO: nginx-proxy-node1 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container nginx-proxy ready: true, restart count 1 May 22 01:15:59.714: INFO: node-feature-discovery-worker-lh5hz from kube-system started at 2021-05-21 20:03:47 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:15:59.714: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-czhnm from kube-system started at 2021-05-21 20:04:29 +0000 UTC (1 container statuses recorded) May 22 01:15:59.714: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:15:59.714: INFO: collectd-mc5kl from monitoring started at 2021-05-21 20:13:40 +0000 UTC (3 container statuses recorded) May 22 01:15:59.714: INFO: Container collectd ready: true, restart count 0 May 22 01:15:59.714: INFO: Container collectd-exporter ready: true, restart count 0 May 22 01:15:59.714: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:15:59.714: INFO: node-exporter-l5k2r from monitoring started at 2021-05-21 20:07:54 +0000 UTC (2 container 
statuses recorded) May 22 01:15:59.714: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:15:59.714: INFO: Container node-exporter ready: true, restart count 0 May 22 01:15:59.714: INFO: prometheus-k8s-0 from monitoring started at 2021-05-21 20:08:06 +0000 UTC (5 container statuses recorded) May 22 01:15:59.714: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 22 01:15:59.714: INFO: Container grafana ready: true, restart count 0 May 22 01:15:59.714: INFO: Container prometheus ready: true, restart count 1 May 22 01:15:59.714: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 22 01:15:59.714: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 22 01:15:59.714: INFO: prometheus-operator-5bb8cb9d8f-mzlrf from monitoring started at 2021-05-21 20:07:47 +0000 UTC (2 container statuses recorded) May 22 01:15:59.714: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:15:59.714: INFO: Container prometheus-operator ready: true, restart count 0 May 22 01:15:59.714: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-r8d7k from monitoring started at 2021-05-22 00:30:47 +0000 UTC (2 container statuses recorded) May 22 01:15:59.714: INFO: Container tas-controller ready: true, restart count 0 May 22 01:15:59.714: INFO: Container tas-extender ready: true, restart count 0 May 22 01:15:59.714: INFO: Logging pods the apiserver thinks is on node node2 before test May 22 01:15:59.722: INFO: cmk-xtrv9 from kube-system started at 2021-05-22 00:30:51 +0000 UTC (2 container statuses recorded) May 22 01:15:59.723: INFO: Container nodereport ready: true, restart count 0 May 22 01:15:59.723: INFO: Container reconcile ready: true, restart count 0 May 22 01:15:59.723: INFO: kube-flannel-5p7gq from kube-system started at 2021-05-21 19:57:34 +0000 UTC (1 container statuses recorded) May 22 01:15:59.723: INFO: Container kube-flannel ready: true, restart count 2 May 22 01:15:59.723: INFO: kube-multus-ds-amd64-6q46t from kube-system started at 2021-05-21 19:57:42 +0000 UTC (1 container statuses recorded) May 22 01:15:59.723: INFO: Container kube-multus ready: true, restart count 1 May 22 01:15:59.723: INFO: kube-proxy-q57hf from kube-system started at 2021-05-21 19:57:00 +0000 UTC (1 container statuses recorded) May 22 01:15:59.723: INFO: Container kube-proxy ready: true, restart count 2 May 22 01:15:59.723: INFO: nginx-proxy-node2 from kube-system started at 2021-05-21 20:03:00 +0000 UTC (1 container statuses recorded) May 22 01:15:59.723: INFO: Container nginx-proxy ready: true, restart count 2 May 22 01:15:59.723: INFO: node-feature-discovery-worker-z827f from kube-system started at 2021-05-22 00:30:50 +0000 UTC (1 container statuses recorded) May 22 01:15:59.723: INFO: Container nfd-worker ready: true, restart count 0 May 22 01:15:59.723: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-jkg9k from kube-system started at 2021-05-22 00:30:58 +0000 UTC (1 container statuses recorded) May 22 01:15:59.723: INFO: Container kube-sriovdp ready: true, restart count 0 May 22 01:15:59.723: INFO: collectd-rkmjk from monitoring started at 2021-05-22 00:31:19 +0000 UTC (3 container statuses recorded) May 22 01:15:59.723: INFO: Container collectd ready: true, restart count 0 May 22 01:15:59.723: INFO: Container collectd-exporter ready: false, restart count 0 May 22 01:15:59.723: INFO: Container rbac-proxy ready: true, restart count 0 May 22 01:15:59.723: INFO: node-exporter-jctsz from monitoring started at 2021-05-22 00:30:49 
+0000 UTC (2 container statuses recorded) May 22 01:15:59.723: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 22 01:15:59.723: INFO: Container node-exporter ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e9817cf432a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e981830ac03], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e99aeaccc33], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6920/filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e9a008899c5], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.73/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e9a01475529], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e9a1cab2140], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 459.518413ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e9a22deddbc], Reason = [Created], Message = [Created container filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95] STEP: Considering event: Type = [Normal], Name = [filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95.16813e9a291834ae], Reason = [Started], Message = [Started container filler-pod-8d2a24aa-9e57-4358-9017-834f1f550f95] STEP: Considering event: Type = [Normal], Name = [without-label.16813e9726ff94e3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6920/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16813e978b6f46d3], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.71/24]] STEP: Considering event: Type = [Normal], Name = [without-label.16813e978c4e1bd5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-label.16813e97a82df3b4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 467.646896ms] STEP: Considering event: Type = [Normal], Name = [without-label.16813e97aed18b3e], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16813e97b4878e07], Reason = [Started], Message = [Started container 
without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16813e98172d724f], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16813e983187b97f], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: Considering event: Type = [Normal], Name = [without-label.16813e98aab64222], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 459.059418ms] STEP: Considering event: Type = [Warning], Name = [without-label.16813e98aab85c62], Reason = [Failed], Message = [Error: cannot find volume "default-token-6zcfx" to mount into container "without-label"] STEP: Considering event: Type = [Warning], Name = [additional-pod31045217-2ae8-4f14-8940-b17d0dfc8ff0.16813e9a6d032b0c], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [additional-pod31045217-2ae8-4f14-8940-b17d0dfc8ff0.16813e9a6d518e65], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 22 01:16:14.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6920" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:15.159 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":12,"skipped":5174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 22 01:16:14.839: INFO: Running AfterSuite actions on all nodes May 22 01:16:14.839: INFO: Running AfterSuite actions on node 1 May 22 01:16:14.839: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0} Ran 12 of 5484 Specs in 541.291 seconds SUCCESS! 
-- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped PASS Ginkgo ran 1 suite in 9m2.470796119s Test Suite Passed
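As a footnote to the pod-overhead spec earlier in this run: the test registers a fake extended resource (example.com/beardsecond) on the nodes and a RuntimeClass whose overhead is charged in that resource, so the "Insufficient example.com/beardsecond" FailedScheduling events come from the scheduler adding the RuntimeClass overhead on top of the pods' own requests. A sketch of such a RuntimeClass, with placeholder name, handler, and quantity rather than the values used by the test:

// Sketch only: a RuntimeClass whose pod overhead is charged against a fake
// extended resource, as in the "verify pod overhead is accounted for" spec.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"}, // placeholder name
		Handler:    "runc",                                  // placeholder handler
		Overhead: &nodev1beta1.Overhead{
			// The scheduler adds this on top of the pod's own requests when
			// checking node capacity, which is what makes the second pod in
			// the log unschedulable.
			PodFixed: v1.ResourceList{
				"example.com/beardsecond": resource.MustParse("1000"),
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}

A pod opts into the overhead by setting spec.runtimeClassName to this RuntimeClass's name.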