I0507 00:01:06.689512 23 e2e.go:129] Starting e2e run "89d42d4f-36e7-4613-bc2c-304d3ed8fc20" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651881665 - Will randomize all specs
Will run 13 of 5773 specs

May 7 00:01:06.724: INFO: >>> kubeConfig: /root/.kube/config
May 7 00:01:06.729: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 7 00:01:06.758: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 7 00:01:06.819: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting
May 7 00:01:06.819: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting
May 7 00:01:06.819: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 7 00:01:06.819: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 7 00:01:06.819: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 7 00:01:06.836: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 7 00:01:06.836: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 7 00:01:06.836: INFO: e2e test version: v1.21.9
May 7 00:01:06.837: INFO: kube-apiserver version: v1.21.1
May 7 00:01:06.837: INFO: >>> kubeConfig: /root/.kube/config
May 7 00:01:06.844: INFO: Cluster IP family: ipv4
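The preamble above records the suite loading the kubeconfig, probing the cluster, and comparing the e2e binary version (v1.21.9) against the apiserver (v1.21.1). For reference, a minimal client-go sketch of that connect-and-probe step, assuming only the kubeconfig path shown in the log (illustrative; not the e2e framework's actual code):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the suite reports (">>> kubeConfig: /root/.kube/config").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The "kube-apiserver version: v1.21.1" line boils down to a discovery call like this.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-apiserver version:", v.GitVersion)
}
```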
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 7 00:01:06.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W0507 00:01:06.890719 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 7 00:01:06.890: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 7 00:01:06.895: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 7 00:01:06.897: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 7 00:01:06.906: INFO: Waiting for terminating namespaces to be deleted...
May 7 00:01:06.908: INFO: Logging pods the apiserver thinks is on node node1 before test
May 7 00:01:06.923: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded)
May 7 00:01:06.924: INFO: Container discover ready: false, restart count 0
May 7 00:01:06.924: INFO: Container init ready: false, restart count 0
May 7 00:01:06.924: INFO: Container install ready: false, restart count 0
May 7 00:01:06.924: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded)
May 7 00:01:06.924: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:06.924: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:06.924: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.924: INFO: Container kube-flannel ready: true, restart count 3
May 7 00:01:06.924: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.924: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:06.924: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.924: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:06.924: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.924: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:06.924: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.924: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:06.924: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.924: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:06.924: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:06.924: INFO: Container collectd ready: true, restart count 0
May 7 00:01:06.924: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:06.924: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:06.924: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:06.924: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:06.924: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:06.924: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded)
May 7 00:01:06.924: INFO: Container config-reloader ready: true, restart count 0
May 7 00:01:06.924: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 7 00:01:06.924: INFO: Container grafana ready: true, restart count 0
May 7 00:01:06.924: INFO: Container prometheus ready: true, restart count 1
May 7 00:01:06.924: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded)
May 7 00:01:06.924: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:06.924: INFO: Container prometheus-operator ready: true, restart count 0
May 7 00:01:06.924: INFO: Logging pods the apiserver thinks is on node node2 before test
May 7 00:01:06.933: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded)
May 7 00:01:06.933: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:06.933: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:06.933: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded)
May 7 00:01:06.933: INFO: Container discover ready: false, restart count 0
May 7 00:01:06.933: INFO: Container init ready: false, restart count 0
May 7 00:01:06.933: INFO: Container install ready: false, restart count 0
May 7 00:01:06.933: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.933: INFO: Container cmk-webhook ready: true, restart count 0
May 7 00:01:06.933: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.933: INFO: Container kube-flannel ready: true, restart count 2
May 7 00:01:06.934: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:06.934: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:06.934: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 7 00:01:06.934: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 7 00:01:06.934: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:06.934: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:06.934: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:06.934: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:06.934: INFO: Container collectd ready: true, restart count 0
May 7 00:01:06.934: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:06.934: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:06.934: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:06.934: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:06.934: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:06.934: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded)
May 7 00:01:06.934: INFO: Container tas-extender ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1d36cbd0-ea8b-4093-8c16-2faa381ad674=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-a1158989-4cdb-48ba-accb-665f52fbe1be testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-a1158989-4cdb-48ba-accb-665f52fbe1be off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-a1158989-4cdb-48ba-accb-665f52fbe1be
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1d36cbd0-ea8b-4093-8c16-2faa381ad674=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 7 00:01:17.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5321" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:10.190 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":1,"skipped":1433,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 7 00:01:17.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 7 00:01:17.076: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 7 00:01:17.085: INFO: Waiting for terminating namespaces to be deleted...
May 7 00:01:17.087: INFO: Logging pods the apiserver thinks is on node node1 before test
May 7 00:01:17.097: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded)
May 7 00:01:17.097: INFO: Container discover ready: false, restart count 0
May 7 00:01:17.097: INFO: Container init ready: false, restart count 0
May 7 00:01:17.097: INFO: Container install ready: false, restart count 0
May 7 00:01:17.097: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded)
May 7 00:01:17.097: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:17.097: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:17.097: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.097: INFO: Container kube-flannel ready: true, restart count 3
May 7 00:01:17.097: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.097: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:17.097: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.097: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:17.097: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.097: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:17.097: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.097: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:17.097: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.097: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:17.097: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:17.097: INFO: Container collectd ready: true, restart count 0
May 7 00:01:17.097: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:17.097: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:17.097: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:17.097: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:17.097: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:17.097: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded)
May 7 00:01:17.097: INFO: Container config-reloader ready: true, restart count 0
May 7 00:01:17.097: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 7 00:01:17.097: INFO: Container grafana ready: true, restart count 0
May 7 00:01:17.097: INFO: Container prometheus ready: true, restart count 1
May 7 00:01:17.097: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded)
May 7 00:01:17.097: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:17.097: INFO: Container prometheus-operator ready: true, restart count 0
May 7 00:01:17.097: INFO: with-tolerations from sched-pred-5321 started at 2022-05-07 00:01:13 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.098: INFO: Container with-tolerations ready: true, restart count 0
May 7 00:01:17.098: INFO: Logging pods the apiserver thinks is on node node2 before test
May 7 00:01:17.105: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded)
May 7 00:01:17.105: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:17.105: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:17.105: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded)
May 7 00:01:17.105: INFO: Container discover ready: false, restart count 0
May 7 00:01:17.105: INFO: Container init ready: false, restart count 0
May 7 00:01:17.105: INFO: Container install ready: false, restart count 0
May 7 00:01:17.105: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container cmk-webhook ready: true, restart count 0
May 7 00:01:17.105: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container kube-flannel ready: true, restart count 2
May 7 00:01:17.105: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:17.105: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:17.105: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 7 00:01:17.105: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 7 00:01:17.105: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:17.105: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:17.105: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:17.105: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:17.105: INFO: Container collectd ready: true, restart count 0
May 7 00:01:17.105: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:17.105: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:17.105: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:17.105: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:17.105: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:17.105: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded)
May 7 00:01:17.105: INFO: Container tas-extender ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b51872ce-438f-4698-94e1-4dd3bed34b19=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-a7dcd052-e458-4d6e-a2d1-212b44188c1d testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a21f77434e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7317/without-toleration to node1]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a2755c7041], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a289742b95], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 337.092775ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a290deadc3], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a297f4e142], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a30f0a080d], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16eca9a31166261f], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-b51872ce-438f-4698-94e1-4dd3bed34b19: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16eca9a31166261f], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-b51872ce-438f-4698-94e1-4dd3bed34b19: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a21f77434e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7317/without-toleration to node1]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a2755c7041], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a289742b95], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 337.092775ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a290deadc3], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a297f4e142], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eca9a30f0a080d], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b51872ce-438f-4698-94e1-4dd3bed34b19=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16eca9a367855e39], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7317/still-no-tolerations to node1]
STEP: removing the label kubernetes.io/e2e-label-key-a7dcd052-e458-4d6e-a2d1-212b44188c1d off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-a7dcd052-e458-4d6e-a2d1-212b44188c1d
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b51872ce-438f-4698-94e1-4dd3bed34b19=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 7 00:01:23.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7317" for this suite.
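Both taint specs above exercise the same mechanism: the test applies a NoSchedule taint to a node, then relaunches a pod either with a toleration that matches the taint (it schedules) or without one (it stays Pending with the FailedScheduling event quoted above). A minimal sketch of that matching check using the corev1 types; the taint key here is an illustrative stand-in for the randomized e2e key in the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The NoSchedule taint the test applies to the chosen node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // real key carries a random UUID suffix
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// "matching" case: this toleration lets the relaunched pod land on the tainted node.
	match := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Println(match.ToleratesTaint(&taint)) // true

	// "not matching" case: a pod with no toleration for this key stays Pending,
	// which is exactly the FailedScheduling event recorded above.
	none := corev1.Toleration{}
	fmt.Println(none.ToleratesTaint(&taint)) // false
}
```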
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:6.182 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":2,"skipped":1503,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 7 00:01:23.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 7 00:01:23.270: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 7 00:01:23.279: INFO: Waiting for terminating namespaces to be deleted...
May 7 00:01:23.281: INFO: Logging pods the apiserver thinks is on node node1 before test
May 7 00:01:23.292: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded)
May 7 00:01:23.292: INFO: Container discover ready: false, restart count 0
May 7 00:01:23.292: INFO: Container init ready: false, restart count 0
May 7 00:01:23.292: INFO: Container install ready: false, restart count 0
May 7 00:01:23.292: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded)
May 7 00:01:23.292: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:23.292: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:23.292: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container kube-flannel ready: true, restart count 3
May 7 00:01:23.292: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:23.292: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:23.292: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:23.292: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:23.292: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:23.292: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:23.292: INFO: Container collectd ready: true, restart count 0
May 7 00:01:23.292: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:23.292: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:23.292: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:23.292: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:23.292: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:23.292: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded)
May 7 00:01:23.292: INFO: Container config-reloader ready: true, restart count 0
May 7 00:01:23.292: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 7 00:01:23.292: INFO: Container grafana ready: true, restart count 0
May 7 00:01:23.292: INFO: Container prometheus ready: true, restart count 1
May 7 00:01:23.292: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded)
May 7 00:01:23.292: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:23.292: INFO: Container prometheus-operator ready: true, restart count 0
May 7 00:01:23.292: INFO: with-tolerations from sched-pred-5321 started at 2022-05-07 00:01:13 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container with-tolerations ready: true, restart count 0
May 7 00:01:23.292: INFO: still-no-tolerations from sched-pred-7317 started at 2022-05-07 00:01:22 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.292: INFO: Container still-no-tolerations ready: false, restart count 0
May 7 00:01:23.292: INFO: Logging pods the apiserver thinks is on node node2 before test
May 7 00:01:23.303: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded)
May 7 00:01:23.303: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:23.303: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:23.303: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded)
May 7 00:01:23.303: INFO: Container discover ready: false, restart count 0
May 7 00:01:23.303: INFO: Container init ready: false, restart count 0
May 7 00:01:23.303: INFO: Container install ready: false, restart count 0
May 7 00:01:23.303: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container cmk-webhook ready: true, restart count 0
May 7 00:01:23.303: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container kube-flannel ready: true, restart count 2
May 7 00:01:23.303: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:23.303: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:23.303: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 7 00:01:23.303: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 7 00:01:23.303: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:23.303: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:23.303: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:23.303: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:23.303: INFO: Container collectd ready: true, restart count 0
May 7 00:01:23.303: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:23.303: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:23.303: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:23.303: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:23.303: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:23.303: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded)
May 7 00:01:23.303: INFO: Container tas-extender ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 7 00:01:37.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2537" for this suite.
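The PodTopologySpread spec above labels two nodes with a dedicated topology key and then requires that 4 pods with MaxSkew=1 split 2+2 across them. A sketch of the shape of that constraint using the corev1 types; the label selector is hypothetical, since the log does not print the selector the test uses:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// evenSpread mirrors the constraint behind "4 pods with MaxSkew=1 are evenly
// distributed into 2 nodes": with two nodes carrying the topology key, per-node
// counts of matching pods may differ by at most 1, so four replicas land 2+2.
func evenSpread() corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter", // the key the test applies to both nodes
		WhenUnsatisfiable: corev1.DoNotSchedule,           // "Filtering": a violating pod stays Pending
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // hypothetical selector
		},
	}
}

func main() {
	fmt.Printf("%+v\n", evenSpread())
}
```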
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:14.181 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":3,"skipped":2229,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 7 00:01:37.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 7 00:01:37.459: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 7 00:01:37.467: INFO: Waiting for terminating namespaces to be deleted...
May 7 00:01:37.469: INFO: Logging pods the apiserver thinks is on node node1 before test
May 7 00:01:37.477: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded)
May 7 00:01:37.478: INFO: Container discover ready: false, restart count 0
May 7 00:01:37.478: INFO: Container init ready: false, restart count 0
May 7 00:01:37.478: INFO: Container install ready: false, restart count 0
May 7 00:01:37.478: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded)
May 7 00:01:37.478: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:37.478: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:37.478: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container kube-flannel ready: true, restart count 3
May 7 00:01:37.478: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:37.478: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:37.478: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:37.478: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:37.478: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:37.478: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:37.478: INFO: Container collectd ready: true, restart count 0
May 7 00:01:37.478: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:37.478: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:37.478: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:37.478: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:37.478: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:37.478: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded)
May 7 00:01:37.478: INFO: Container config-reloader ready: true, restart count 0
May 7 00:01:37.478: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 7 00:01:37.478: INFO: Container grafana ready: true, restart count 0
May 7 00:01:37.478: INFO: Container prometheus ready: true, restart count 1
May 7 00:01:37.478: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded)
May 7 00:01:37.478: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:37.478: INFO: Container prometheus-operator ready: true, restart count 0
May 7 00:01:37.478: INFO: rs-e2e-pts-filter-4rpm2 from sched-pred-2537 started at 2022-05-07 00:01:31 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container e2e-pts-filter ready: true, restart count 0
May 7 00:01:37.478: INFO: rs-e2e-pts-filter-5f76z from sched-pred-2537 started at 2022-05-07 00:01:31 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.478: INFO: Container e2e-pts-filter ready: true, restart count 0
May 7 00:01:37.478: INFO: Logging pods the apiserver thinks is on node node2 before test
May 7 00:01:37.488: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded)
May 7 00:01:37.488: INFO: Container nodereport ready: true, restart count 0
May 7 00:01:37.488: INFO: Container reconcile ready: true, restart count 0
May 7 00:01:37.488: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded)
May 7 00:01:37.488: INFO: Container discover ready: false, restart count 0
May 7 00:01:37.488: INFO: Container init ready: false, restart count 0
May 7 00:01:37.488: INFO: Container install ready: false, restart count 0
May 7 00:01:37.488: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.488: INFO: Container cmk-webhook ready: true, restart count 0
May 7 00:01:37.488: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.488: INFO: Container kube-flannel ready: true, restart count 2
May 7 00:01:37.488: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container kube-multus ready: true, restart count 1
May 7 00:01:37.489: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container kube-proxy ready: true, restart count 2
May 7 00:01:37.489: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 7 00:01:37.489: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 7 00:01:37.489: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container nginx-proxy ready: true, restart count 2
May 7 00:01:37.489: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container nfd-worker ready: true, restart count 0
May 7 00:01:37.489: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container kube-sriovdp ready: true, restart count 0
May 7 00:01:37.489: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded)
May 7 00:01:37.489: INFO: Container collectd ready: true, restart count 0
May 7 00:01:37.489: INFO: Container collectd-exporter ready: true, restart count 0
May 7 00:01:37.489: INFO: Container rbac-proxy ready: true, restart count 0
May 7 00:01:37.489: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded)
May 7 00:01:37.489: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 7 00:01:37.489: INFO: Container node-exporter ready: true, restart count 0
May 7 00:01:37.489: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container tas-extender ready: true, restart count 0
May 7 00:01:37.489: INFO: rs-e2e-pts-filter-cft5g from sched-pred-2537 started at 2022-05-07 00:01:31 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container e2e-pts-filter ready: true, restart count 0
May 7 00:01:37.489: INFO: rs-e2e-pts-filter-hd94n from sched-pred-2537 started at 2022-05-07 00:01:31 +0000 UTC (1 container statuses recorded)
May 7 00:01:37.489: INFO: Container e2e-pts-filter ready: true, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a7d05def52], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Warning], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a81034a42b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a8b8773d72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4833/filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a912b18fc3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a9254f411b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 312.31638ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a92cf9da13], Reason = [Created], Message = [Created container filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f]
STEP: Considering event: Type = [Normal], Name = [filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f.16eca9a9342d338d], Reason = [Started], Message = [Started container filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f]
STEP: Considering event: Type = [Normal], Name = [without-label.16eca9a6dfb1916e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4833/without-label to node1]
STEP: Considering event: Type = [Normal], Name = [without-label.16eca9a739988a91], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-label.16eca9a74bd28585], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 305.783377ms]
STEP: Considering event: Type = [Normal], Name = [without-label.16eca9a752cafcea], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16eca9a759d04dee], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16eca9a7cfedb156], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [additional-pod475291f4-8fa4-4d5b-ac20-c960ddbc1c63.16eca9a9aebe0dc1], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 7 00:01:50.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4833" for this suite.
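The overhead spec registers a RuntimeClass whose Overhead.PodFixed is charged against the node on top of the containers' own requests; the fake extended resource example.com/beardsecond in the events above is the dimension that gets exhausted. A sketch of such a RuntimeClass with the node/v1 API (object name and quantity are illustrative; only the resource name comes from the log):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overheadClass sketches a RuntimeClass with a fixed per-pod overhead. Any pod
// whose spec.runtimeClassName selects it is billed this amount in addition to
// its containers' requests, which is why the "additional-pod" above fails with
// "Insufficient example.com/beardsecond" even though its own request would fit.
func overheadClass() nodev1.RuntimeClass {
	return nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-overhead"}, // hypothetical name
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				"example.com/beardsecond": resource.MustParse("250"), // illustrative quantity
			},
		},
	}
}

func main() {
	_ = overheadClass()
}
```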
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:13.182 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
    verify pod overhead is accounted for
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":4,"skipped":2563,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 7 00:01:50.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 7 00:01:50.640: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 7 00:01:50.648: INFO: Waiting for terminating namespaces to be deleted...
May 7 00:01:50.651: INFO: Logging pods the apiserver thinks is on node node1 before test May 7 00:01:50.658: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 7 00:01:50.658: INFO: Container discover ready: false, restart count 0 May 7 00:01:50.658: INFO: Container init ready: false, restart count 0 May 7 00:01:50.658: INFO: Container install ready: false, restart count 0 May 7 00:01:50.658: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 7 00:01:50.658: INFO: Container nodereport ready: true, restart count 0 May 7 00:01:50.658: INFO: Container reconcile ready: true, restart count 0 May 7 00:01:50.658: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:01:50.658: INFO: Container kube-flannel ready: true, restart count 3 May 7 00:01:50.658: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:01:50.658: INFO: Container kube-multus ready: true, restart count 1 May 7 00:01:50.658: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:01:50.659: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:01:50.659: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:01:50.659: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:01:50.659: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:01:50.659: INFO: Container collectd ready: true, restart count 0 May 7 00:01:50.659: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:01:50.659: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:01:50.659: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:01:50.659: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:01:50.659: INFO: Container node-exporter ready: true, restart count 0 May 7 00:01:50.659: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 7 00:01:50.659: INFO: Container config-reloader ready: true, restart count 0 May 7 00:01:50.659: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 7 00:01:50.659: INFO: Container grafana ready: true, restart count 0 May 7 00:01:50.659: INFO: Container prometheus ready: true, restart count 1 May 7 00:01:50.659: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 7 00:01:50.659: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:01:50.659: INFO: Container prometheus-operator ready: true, restart count 0 May 7 00:01:50.659: INFO: rs-e2e-pts-filter-4rpm2 from 
sched-pred-2537 started at 2022-05-07 00:01:31 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container e2e-pts-filter ready: false, restart count 0 May 7 00:01:50.659: INFO: rs-e2e-pts-filter-5f76z from sched-pred-2537 started at 2022-05-07 00:01:31 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container e2e-pts-filter ready: false, restart count 0 May 7 00:01:50.659: INFO: filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f from sched-pred-4833 started at 2022-05-07 00:01:45 +0000 UTC (1 container statuses recorded) May 7 00:01:50.659: INFO: Container filler-pod-f18719df-60e6-4ad2-b445-610d30d9197f ready: true, restart count 0 May 7 00:01:50.659: INFO: Logging pods the apiserver thinks is on node node2 before test May 7 00:01:50.669: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 7 00:01:50.669: INFO: Container nodereport ready: true, restart count 0 May 7 00:01:50.669: INFO: Container reconcile ready: true, restart count 0 May 7 00:01:50.669: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 7 00:01:50.669: INFO: Container discover ready: false, restart count 0 May 7 00:01:50.669: INFO: Container init ready: false, restart count 0 May 7 00:01:50.669: INFO: Container install ready: false, restart count 0 May 7 00:01:50.669: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container cmk-webhook ready: true, restart count 0 May 7 00:01:50.669: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container kube-flannel ready: true, restart count 2 May 7 00:01:50.669: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container kube-multus ready: true, restart count 1 May 7 00:01:50.669: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:01:50.669: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 7 00:01:50.669: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 7 00:01:50.669: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:01:50.669: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:01:50.669: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:01:50.669: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 
container statuses recorded) May 7 00:01:50.669: INFO: Container collectd ready: true, restart count 0 May 7 00:01:50.669: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:01:50.669: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:01:50.669: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:01:50.669: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:01:50.669: INFO: Container node-exporter ready: true, restart count 0 May 7 00:01:50.669: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 7 00:01:50.669: INFO: Container tas-extender ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16eca9ab5b33b9fe], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:01:57.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5947" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":5,"skipped":2563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:01:57.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 7 00:01:57.818: INFO: Waiting up to 1m0s for all nodes to be ready May 7 00:02:57.878: INFO: Waiting for terminating namespaces to be deleted... 
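------------------------------
The FailedScheduling event quoted above ("0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }") is exactly what a pod with a nonempty NodeSelector that matches no node produces. A minimal sketch of that shape of pod, assuming a made-up label key no node carries (the suite's own pod may differ in detail):

```go
// Hedged sketch: a NodeSelector no node satisfies keeps the pod Pending.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node carries this (hypothetical) label, so the scheduler reports
			// "node(s) didn't match Pod's node affinity/selector"; the three
			// masters are additionally excluded by their NoSchedule taint.
			NodeSelector: map[string]string{"example.com/nonexistent": "true"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Name, "would remain Pending: 0/5 nodes available")
}
```
------------------------------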
May 7 00:02:57.881: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 7 00:02:57.902: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting May 7 00:02:57.902: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting May 7 00:02:57.902: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 7 00:02:57.902: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 7 00:02:57.917: INFO: ComputeCPUMemFraction for node: node1 May 7 00:02:57.918: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:02:57.918: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:02:57.918: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:02:57.918: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:02:57.918: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:02:57.918: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:02:57.918: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:02:57.918: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:02:57.918: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:02:57.918: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:02:57.918: INFO: ComputeCPUMemFraction for node: node2 May 7 00:02:57.918: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:02:57.918: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:02:57.918: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:02:57.918: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:02:57.918: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:02:57.918: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:02:57.918: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:02:57.918: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:02:57.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 
00:02:57.918: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:02:57.918: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 May 7 00:02:57.935: INFO: ComputeCPUMemFraction for node: node1 May 7 00:02:57.935: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:02:57.935: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:02:57.935: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:02:57.935: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:02:57.935: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:02:57.935: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:02:57.935: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:02:57.935: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:02:57.935: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:02:57.935: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:02:57.935: INFO: ComputeCPUMemFraction for node: node2 May 7 00:02:57.935: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:02:57.935: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:02:57.935: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:02:57.935: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:02:57.935: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:02:57.935: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:02:57.935: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:02:57.935: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:02:57.935: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:02:57.935: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 
0.006974025974025974 May 7 00:02:57.935: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 May 7 00:02:57.951: INFO: Waiting for running... May 7 00:02:57.952: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 7 00:03:03.024: INFO: ComputeCPUMemFraction for node: node1 May 7 00:03:03.024: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:03:03.024: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:03:03.024: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:03:03.024: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:03:03.025: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:03:03.025: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:03:03.025: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:03:03.025: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:03:03.025: INFO: Pod for on the node: 89ead450-24ee-426c-9d5c-ac9fafc9c440-0, Cpu: 37413, Mem: 87429507072 May 7 00:03:03.025: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 7 00:03:03.025: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
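------------------------------
The fractions logged above are plain ratios of requested to allocatable resources, and the "balanced" filler pods (e.g. 89ead450-24ee-426c-9d5c-ac9fafc9c440-0 with Cpu: 37413) are sized to lift every node to a common target fraction — 0.5 here, inferred from the output rather than stated in the log. For node1 and node2 the CPU arithmetic reproduces the logged numbers exactly:

\[
\text{cpuFraction}_{\text{node1}} = \frac{1087}{77000} \approx 0.01412, \qquad
\text{cpuFraction}_{\text{node2}} = \frac{537}{77000} \approx 0.00697
\]
\[
\text{filler}_{\text{node1}} = 0.5 \times 77000 - 1087 = 37413\,\text{m}, \qquad
\text{filler}_{\text{node2}} = 0.5 \times 77000 - 537 = 37963\,\text{m}
\]

With the fillers placed, both nodes sit at exactly \(38500 / 77000 = 0.5\) CPU; memory is padded the same way, with the small residue (memFraction 0.50007) plausibly due to rounding in the memory quantities.
------------------------------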
May 7 00:03:03.025: INFO: ComputeCPUMemFraction for node: node2 May 7 00:03:03.025: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:03:03.025: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:03:03.025: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:03:03.025: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:03:03.025: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:03:03.025: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:03:03.025: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:03:03.025: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:03:03.025: INFO: Pod for on the node: b346d656-ffe3-442a-8591-d7a61eaab9f1-0, Cpu: 37963, Mem: 88885940224 May 7 00:03:03.025: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 7 00:03:03.025: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Trying to apply 10 (tolerable) taints on the first node. 
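------------------------------
The verifications that follow apply ten taint/toleration pairs; one such pair looks roughly like the sketch below. The UUID-bearing keys and values in the log are generated per run, so the literals here are placeholders.

```go
// Sketch of one "tolerable" taint/toleration pair like the ten applied below.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// PreferNoSchedule is a soft taint: it lowers a node's score for pods that
	// do not tolerate it, rather than filtering the node out entirely.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example", // placeholder
		Value:  "testing-taint-value-example",                     // placeholder
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
	// The test pod tolerates all ten taints on the first node, so that node
	// scores best; the other nodes carry intolerable taints and score worse.
	tol := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}
	fmt.Printf("taint %s=%s:%s tolerated=%v\n",
		taint.Key, taint.Value, taint.Effect, tol.ToleratesTaint(&taint))
}
```
------------------------------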
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-653cdf17-deee-4351-8b5f=testing-taint-value-f2c8a8a5-810b-487c-a99c-b3fcf5037e45:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3b1413e1-7522-4db7-8982=testing-taint-value-5b4e5955-b9e0-48e7-b101-9a431d29d475:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-19a45e4a-b914-422a-8024=testing-taint-value-6309581d-4876-4bb2-ae38-1540b23c11d2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d99da7a6-710c-40fe-93cc=testing-taint-value-50dbc33c-4ddd-495a-9b70-e9e63a6d9a71:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9517c80c-a264-46e9-8b34=testing-taint-value-85ac2855-ee50-4e56-a0d7-4cdc2455bcd8:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-627ed971-7773-4f40-ae85=testing-taint-value-d35a4519-6a48-4a68-9268-559517bfb695:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9501a278-6956-419a-8f38=testing-taint-value-3ed12f77-c205-4318-81d9-1343f66a1259:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a9e488c0-1c38-4f60-9adf=testing-taint-value-cf5e2707-ffa1-408a-99a5-dc0034f4eca6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-70112cad-f101-4302-b9f0=testing-taint-value-0519e61e-24c5-4da6-9596-a3298986b87d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8fcd0cac-0310-4d2f-aae7=testing-taint-value-0d2d6786-61bd-4cb4-9114-f402cbe8eac7:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-502bc656-f147-43cf-b80c=testing-taint-value-31d520c8-f772-47be-8285-accc60452db9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ce93eeb3-cbe0-4944-a0f3=testing-taint-value-63c9005e-bd4f-429b-9498-b6fe4ad87169:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-eab15a10-73e2-498a-96e7=testing-taint-value-5021e77c-064e-452a-9b63-540af1cdc0e1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-babea711-5859-49eb-be46=testing-taint-value-a3d80441-43c0-49a4-96df-2ed4fb45456b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e8d46651-c451-4344-81c8=testing-taint-value-5cb1aefb-86ed-409a-bef7-f96b25a75cb6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-86163593-7efc-4385-acd5=testing-taint-value-fd324afc-e10c-4fed-a3f3-92ec90899813:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c596faac-c09e-4397-aab7=testing-taint-value-ee50b2a7-2291-4534-995a-4be718ea8c86:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-99cebf12-58a2-4dbc-ae6a=testing-taint-value-9e9d46b7-4646-4376-a6b2-f03242f77a7a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ae5ed182-912e-4432-9a5c=testing-taint-value-cf77cc39-b29f-4775-94ea-7da1b7663abe:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-4f1dc18e-68c1-43a7-8d32=testing-taint-value-df9efd81-c626-4265-8e1c-a0f59aa84f40:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-502bc656-f147-43cf-b80c=testing-taint-value-31d520c8-f772-47be-8285-accc60452db9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ce93eeb3-cbe0-4944-a0f3=testing-taint-value-63c9005e-bd4f-429b-9498-b6fe4ad87169:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-eab15a10-73e2-498a-96e7=testing-taint-value-5021e77c-064e-452a-9b63-540af1cdc0e1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-babea711-5859-49eb-be46=testing-taint-value-a3d80441-43c0-49a4-96df-2ed4fb45456b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e8d46651-c451-4344-81c8=testing-taint-value-5cb1aefb-86ed-409a-bef7-f96b25a75cb6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-86163593-7efc-4385-acd5=testing-taint-value-fd324afc-e10c-4fed-a3f3-92ec90899813:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c596faac-c09e-4397-aab7=testing-taint-value-ee50b2a7-2291-4534-995a-4be718ea8c86:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-99cebf12-58a2-4dbc-ae6a=testing-taint-value-9e9d46b7-4646-4376-a6b2-f03242f77a7a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ae5ed182-912e-4432-9a5c=testing-taint-value-cf77cc39-b29f-4775-94ea-7da1b7663abe:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4f1dc18e-68c1-43a7-8d32=testing-taint-value-df9efd81-c626-4265-8e1c-a0f59aa84f40:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-653cdf17-deee-4351-8b5f=testing-taint-value-f2c8a8a5-810b-487c-a99c-b3fcf5037e45:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3b1413e1-7522-4db7-8982=testing-taint-value-5b4e5955-b9e0-48e7-b101-9a431d29d475:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-19a45e4a-b914-422a-8024=testing-taint-value-6309581d-4876-4bb2-ae38-1540b23c11d2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d99da7a6-710c-40fe-93cc=testing-taint-value-50dbc33c-4ddd-495a-9b70-e9e63a6d9a71:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9517c80c-a264-46e9-8b34=testing-taint-value-85ac2855-ee50-4e56-a0d7-4cdc2455bcd8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-627ed971-7773-4f40-ae85=testing-taint-value-d35a4519-6a48-4a68-9268-559517bfb695:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9501a278-6956-419a-8f38=testing-taint-value-3ed12f77-c205-4318-81d9-1343f66a1259:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-a9e488c0-1c38-4f60-9adf=testing-taint-value-cf5e2707-ffa1-408a-99a5-dc0034f4eca6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-70112cad-f101-4302-b9f0=testing-taint-value-0519e61e-24c5-4da6-9596-a3298986b87d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8fcd0cac-0310-4d2f-aae7=testing-taint-value-0d2d6786-61bd-4cb4-9114-f402cbe8eac7:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:03:18.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2905" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:80.583 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":6,"skipped":2686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:03:18.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 7 00:03:18.411: INFO: Waiting up to 1m0s for all nodes to be ready May 7 00:04:18.466: INFO: Waiting for terminating namespaces to be deleted... May 7 00:04:18.470: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 7 00:04:18.490: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting May 7 00:04:18.490: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting May 7 00:04:18.490: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 7 00:04:18.490: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
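------------------------------
The "avoidPod annotation" spec being set up here exercises scheduler.alpha.kubernetes.io/preferAvoidPods, a long-deprecated alpha signal: a node annotated this way is scored down for pods owned by the named controller. A hedged sketch of the payload follows — the RC name matches the log below, while the UID, reason, and message are placeholders.

```go
// Hedged sketch of the preferAvoidPods node annotation payload.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "00000000-0000-0000-0000-000000000000", // placeholder
					Controller: &isController,
				},
			},
			Reason:  "illustrative reason",
			Message: "illustrative message",
		}},
	}
	payload, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}
	// The spec annotates one node like this, then expects the scaled-up RC's
	// replica to land on the other node.
	fmt.Printf("metadata.annotations[%q] = %s\n",
		corev1.PreferAvoidPodsAnnotationKey, payload)
}
```
------------------------------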
May 7 00:04:18.507: INFO: ComputeCPUMemFraction for node: node1 May 7 00:04:18.507: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:04:18.507: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:04:18.507: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:04:18.507: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:04:18.507: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:04:18.507: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:04:18.507: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:04:18.507: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:04:18.507: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:04:18.507: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:04:18.507: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:04:18.507: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:04:18.507: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:04:18.507: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:04:18.507: INFO: ComputeCPUMemFraction for node: node2 May 7 00:04:18.508: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:04:18.508: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:04:18.508: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:04:18.508: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:04:18.508: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:04:18.508: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:04:18.508: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:04:18.508: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:04:18.508: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:04:18.508: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:04:18.508: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:04:18.508: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:04:18.508: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:04:18.508: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:04:18.508: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:04:18.508: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 May 7 00:04:18.531: INFO: ComputeCPUMemFraction for node: node1 May 7 00:04:18.531: INFO: Pod for on the node: 
cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:04:18.531: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:04:18.531: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:04:18.531: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:04:18.531: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:04:18.531: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:04:18.531: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:04:18.531: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:04:18.531: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:04:18.531: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:04:18.531: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:04:18.531: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:04:18.531: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:04:18.532: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:04:18.532: INFO: ComputeCPUMemFraction for node: node2 May 7 00:04:18.532: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:04:18.532: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:04:18.532: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:04:18.532: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:04:18.532: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:04:18.532: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:04:18.532: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:04:18.532: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:04:18.532: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:04:18.532: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:04:18.532: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:04:18.532: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:04:18.532: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:04:18.532: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:04:18.532: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:04:18.532: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 May 7 00:04:18.547: INFO: Waiting for running... May 7 00:04:18.548: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 7 00:04:23.615: INFO: ComputeCPUMemFraction for node: node1 May 7 00:04:23.615: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:04:23.615: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:04:23.615: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:04:23.615: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:04:23.615: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:04:23.615: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:04:23.615: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:04:23.615: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:04:23.615: INFO: Pod for on the node: d6dee9d3-2221-4bee-b775-2603cc02c12f-0, Cpu: 37413, Mem: 87429507072 May 7 00:04:23.615: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 7 00:04:23.615: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 7 00:04:23.615: INFO: ComputeCPUMemFraction for node: node2 May 7 00:04:23.615: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:04:23.615: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:04:23.615: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:04:23.615: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:04:23.615: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:04:23.615: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:04:23.615: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:04:23.615: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:04:23.615: INFO: Pod for on the node: 1ffa8aa6-ad5e-4072-bdf9-1496080a0127-0, Cpu: 37963, Mem: 88885940224 May 7 00:04:23.615: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 7 00:04:23.615: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Create a RC, with 0 
replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-957 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-957, will wait for the garbage collector to delete the pods May 7 00:04:29.792: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.575993ms May 7 00:04:29.892: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.360684ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:04:40.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-957" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:82.439 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":7,"skipped":3088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:04:40.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:04:40.867: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:04:40.875: INFO: Waiting for terminating namespaces to be deleted... May 7 00:04:40.877: INFO: Logging pods the apiserver thinks is on node node1 before test May 7 00:04:40.888: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 7 00:04:40.888: INFO: Container discover ready: false, restart count 0 May 7 00:04:40.888: INFO: Container init ready: false, restart count 0 May 7 00:04:40.888: INFO: Container install ready: false, restart count 0 May 7 00:04:40.888: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 7 00:04:40.888: INFO: Container nodereport ready: true, restart count 0 May 7 00:04:40.888: INFO: Container reconcile ready: true, restart count 0 May 7 00:04:40.888: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:04:40.888: INFO: Container kube-flannel ready: true, restart count 3 May 7 00:04:40.888: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:04:40.888: INFO: Container kube-multus ready: true, restart count 1 May 7 00:04:40.888: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:04:40.888: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:04:40.888: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:04:40.888: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:04:40.888: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:04:40.888: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:04:40.888: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:04:40.888: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:04:40.888: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:04:40.888: INFO: Container collectd ready: true, restart count 0 May 7 00:04:40.888: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:04:40.888: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:04:40.888: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:04:40.888: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:04:40.888: INFO: Container node-exporter ready: true, restart count 0 May 7 00:04:40.888: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 7 00:04:40.888: INFO: Container config-reloader ready: true, restart count 0 May 7 00:04:40.888: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 7 00:04:40.888: INFO: Container grafana ready: true, restart count 0 May 7 00:04:40.888: INFO: Container prometheus ready: true, restart count 1 May 7 00:04:40.888: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 
UTC (2 container statuses recorded) May 7 00:04:40.888: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:04:40.888: INFO: Container prometheus-operator ready: true, restart count 0 May 7 00:04:40.888: INFO: Logging pods the apiserver thinks is on node node2 before test May 7 00:04:40.895: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 7 00:04:40.895: INFO: Container nodereport ready: true, restart count 0 May 7 00:04:40.895: INFO: Container reconcile ready: true, restart count 0 May 7 00:04:40.895: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 7 00:04:40.895: INFO: Container discover ready: false, restart count 0 May 7 00:04:40.895: INFO: Container init ready: false, restart count 0 May 7 00:04:40.895: INFO: Container install ready: false, restart count 0 May 7 00:04:40.895: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container cmk-webhook ready: true, restart count 0 May 7 00:04:40.895: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container kube-flannel ready: true, restart count 2 May 7 00:04:40.895: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container kube-multus ready: true, restart count 1 May 7 00:04:40.895: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:04:40.895: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 7 00:04:40.895: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 7 00:04:40.895: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:04:40.895: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:04:40.895: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:04:40.895: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:04:40.895: INFO: Container collectd ready: true, restart count 0 May 7 00:04:40.895: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:04:40.895: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:04:40.895: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:04:40.895: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 
7 00:04:40.895: INFO: Container node-exporter ready: true, restart count 0 May 7 00:04:40.895: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 7 00:04:40.895: INFO: Container tas-extender ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6fa35a62-78ad-4f14-8feb-bf075594e026 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-6fa35a62-78ad-4f14-8feb-bf075594e026 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-6fa35a62-78ad-4f14-8feb-bf075594e026 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:04:48.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8446" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.145 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":8,"skipped":4121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:04:48.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:04:49.011: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:04:49.020: INFO: Waiting for terminating namespaces to be deleted... 
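------------------------------
The hostPort-conflict spec being set up here will create three pods that all ask for hostPort 54321 yet still co-schedule, because a conflict only exists when hostIP, hostPort, and protocol all coincide. A rough sketch of the three distinguishing container-port stanzas (not the test's own fixtures; 10.10.190.207 is the node IP from the log, the container port number is an arbitrary placeholder):

```go
// Sketch: same hostPort, but distinct (HostIP, HostPort, Protocol) triples.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod1 := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}
	pod2 := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54321, HostIP: "10.10.190.207", Protocol: corev1.ProtocolTCP}
	pod3 := corev1.ContainerPort{ContainerPort: 8080, HostPort: 54321, HostIP: "10.10.190.207", Protocol: corev1.ProtocolUDP}
	for i, p := range []corev1.ContainerPort{pod1, pod2, pod3} {
		// No two entries share all three of HostIP, HostPort, and Protocol,
		// so kubelet and scheduler see no port conflict on one node.
		fmt.Printf("pod%d: %s:%d/%s\n", i+1, p.HostIP, p.HostPort, p.Protocol)
	}
}
```
------------------------------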
May 7 00:04:49.022: INFO: Logging pods the apiserver thinks is on node node1 before test May 7 00:04:49.034: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 7 00:04:49.034: INFO: Container discover ready: false, restart count 0 May 7 00:04:49.034: INFO: Container init ready: false, restart count 0 May 7 00:04:49.034: INFO: Container install ready: false, restart count 0 May 7 00:04:49.034: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 7 00:04:49.034: INFO: Container nodereport ready: true, restart count 0 May 7 00:04:49.034: INFO: Container reconcile ready: true, restart count 0 May 7 00:04:49.034: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:04:49.034: INFO: Container kube-flannel ready: true, restart count 3 May 7 00:04:49.034: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:04:49.034: INFO: Container kube-multus ready: true, restart count 1 May 7 00:04:49.034: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:04:49.035: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:04:49.035: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:04:49.035: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:04:49.035: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:04:49.035: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:04:49.035: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:04:49.035: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:04:49.035: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:04:49.035: INFO: Container collectd ready: true, restart count 0 May 7 00:04:49.035: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:04:49.035: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:04:49.035: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:04:49.035: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:04:49.035: INFO: Container node-exporter ready: true, restart count 0 May 7 00:04:49.035: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 7 00:04:49.035: INFO: Container config-reloader ready: true, restart count 0 May 7 00:04:49.035: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 7 00:04:49.035: INFO: Container grafana ready: true, restart count 0 May 7 00:04:49.035: INFO: Container prometheus ready: true, restart count 1 May 7 00:04:49.035: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 7 00:04:49.035: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:04:49.035: INFO: Container prometheus-operator ready: true, restart count 0 May 7 00:04:49.035: INFO: Logging pods the apiserver thinks is on 
node node2 before test May 7 00:04:49.053: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 7 00:04:49.053: INFO: Container nodereport ready: true, restart count 0 May 7 00:04:49.053: INFO: Container reconcile ready: true, restart count 0 May 7 00:04:49.053: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 7 00:04:49.053: INFO: Container discover ready: false, restart count 0 May 7 00:04:49.053: INFO: Container init ready: false, restart count 0 May 7 00:04:49.053: INFO: Container install ready: false, restart count 0 May 7 00:04:49.053: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 7 00:04:49.053: INFO: Container cmk-webhook ready: true, restart count 0 May 7 00:04:49.053: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:04:49.053: INFO: Container kube-flannel ready: true, restart count 2 May 7 00:04:49.054: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container kube-multus ready: true, restart count 1 May 7 00:04:49.054: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:04:49.054: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 7 00:04:49.054: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 7 00:04:49.054: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:04:49.054: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:04:49.054: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:04:49.054: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:04:49.054: INFO: Container collectd ready: true, restart count 0 May 7 00:04:49.054: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:04:49.054: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:04:49.054: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:04:49.054: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:04:49.054: INFO: Container node-exporter ready: true, restart count 0 May 7 00:04:49.054: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: 
Container tas-extender ready: true, restart count 0 May 7 00:04:49.054: INFO: with-labels from sched-pred-8446 started at 2022-05-07 00:04:44 +0000 UTC (1 container statuses recorded) May 7 00:04:49.054: INFO: Container with-labels ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0fffae47-9c7f-433f-a36e-819cb18b6cc2 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-0fffae47-9c7f-433f-a36e-819cb18b6cc2 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0fffae47-9c7f-433f-a36e-819cb18b6cc2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:05:05.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2715" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.248 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":9,"skipped":4299,"failed":0} ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:05:05.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 7 00:05:05.262: INFO: Waiting up to 1m0s for all nodes to be ready May 7 00:06:05.315: INFO: Waiting for terminating
namespaces to be deleted... May 7 00:06:05.317: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 7 00:06:05.334: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting May 7 00:06:05.334: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting May 7 00:06:05.334: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 7 00:06:05.334: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 7 00:06:05.350: INFO: ComputeCPUMemFraction for node: node1 May 7 00:06:05.350: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:06:05.350: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:06:05.350: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:06:05.350: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:06:05.350: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:06:05.350: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:06:05.350: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:06:05.350: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:06:05.350: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:06:05.350: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:06:05.350: INFO: ComputeCPUMemFraction for node: node2 May 7 00:06:05.350: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:06:05.350: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:06:05.350: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:06:05.350: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:06:05.350: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:06:05.350: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:06:05.350: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:06:05.350: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:06:05.350: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:06:05.351: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, 
Mem: 209715200 May 7 00:06:05.351: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:06:05.351: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname May 7 00:06:09.397: INFO: ComputeCPUMemFraction for node: node1 May 7 00:06:09.397: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:06:09.397: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:06:09.397: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:06:09.397: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:06:09.397: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:06:09.397: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:06:09.397: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:06:09.397: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:06:09.397: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:06:09.397: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:06:09.397: INFO: ComputeCPUMemFraction for node: node2 May 7 00:06:09.397: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:06:09.397: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:06:09.397: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:06:09.397: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:06:09.397: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:06:09.397: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:06:09.397: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:06:09.397: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:06:09.398: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:06:09.398: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:06:09.398: INFO: Pod for on the node: 
node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:06:09.398: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:06:09.398: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:06:09.398: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 May 7 00:06:09.410: INFO: Waiting for running... May 7 00:06:09.413: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 7 00:06:14.491: INFO: ComputeCPUMemFraction for node: node1 May 7 00:06:14.491: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:06:14.491: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:06:14.491: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:06:14.491: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:06:14.491: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:06:14.491: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:06:14.491: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:06:14.491: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:06:14.491: INFO: Pod for on the node: 18733a4c-b53e-4665-865a-e4d4ccd23417-0, Cpu: 45113, Mem: 105317967872 May 7 00:06:14.491: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 7 00:06:14.491: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
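[Note] For readers decoding these dumps: ComputeCPUMemFraction sums the resource requests of every pod on a node and divides by the node's allocatable, and the "balanced" filler pod is then sized to lift each node to the same target fraction (0.6 in this test). A minimal sketch of that arithmetic in Go, using the node1 figures above; the helper name is ours, not the e2e framework's:

    package main

    import "fmt"

    // fillerRequest returns the request a filler pod must carry so that a node
    // whose existing pods request `requested` out of `allocatable` lands at the
    // target utilization fraction.
    func fillerRequest(allocatable, requested int64, target float64) int64 {
    	return int64(target*float64(allocatable)) - requested
    }

    func main() {
    	// node1, CPU in millicores, figures taken from the log above.
    	fmt.Println(float64(1087) / float64(77000))  // cpuFraction before balancing: ~0.0141
    	fmt.Println(fillerRequest(77000, 1087, 0.6)) // 45113, the CPU request of the filler pod
    }

This is exactly why the 18733a4c-... pod above requests 45113m: 1087 + 45113 = 46200 = 0.6 × 77000, giving the logged cpuFraction of 0.6.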
May 7 00:06:14.491: INFO: ComputeCPUMemFraction for node: node2 May 7 00:06:14.491: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:06:14.491: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:06:14.491: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:06:14.491: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:06:14.491: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:06:14.491: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:06:14.491: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:06:14.491: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:06:14.492: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:06:14.492: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:06:14.492: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:06:14.492: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:06:14.492: INFO: Pod for on the node: 9b22f319-b3da-4f52-86c2-1640a9d18aa1-0, Cpu: 45663, Mem: 106774400614 May 7 00:06:14.492: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 7 00:06:14.492: INFO: Node: node2, totalRequestedMemResource: 107343345254, memAllocatableVal: 178884603904, memFraction: 0.6000703409422913 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:06:24.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8665" for this suite. 
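[Note] The verification above hinges on the anti-affinity carried by the test pod: pod-with-label-security-s1 pins a security=S1 pod to node1, so once both nodes are equally loaded, a pod that repels that label should land on node2. A minimal sketch of such a pod spec with the client-go API types; the exact term the e2e test builds (weight, required vs. preferred) may differ:

    package main

    import (
    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // antiAffinityPod prefers to avoid any node already running a pod labeled
    // security=S1, keyed on the per-host topology domain.
    func antiAffinityPod() *v1.Pod {
    	return &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-anti-affinity"},
    		Spec: v1.PodSpec{
    			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
    			Affinity: &v1.Affinity{
    				PodAntiAffinity: &v1.PodAntiAffinity{
    					PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
    						Weight: 100,
    						PodAffinityTerm: v1.PodAffinityTerm{
    							TopologyKey: "kubernetes.io/hostname",
    							LabelSelector: &metav1.LabelSelector{
    								MatchExpressions: []metav1.LabelSelectorRequirement{{
    									Key:      "security",
    									Operator: metav1.LabelSelectorOpIn,
    									Values:   []string{"S1"},
    								}},
    							},
    						},
    					}},
    				},
    			},
    		},
    	}
    }

    func main() { _ = antiAffinityPod() }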
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:79.313 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":10,"skipped":4392,"failed":0} ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:06:24.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 7 00:06:24.576: INFO: Waiting up to 1m0s for all nodes to be ready May 7 00:07:24.634: INFO: Waiting for terminating namespaces to be deleted... May 7 00:07:24.636: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 7 00:07:24.656: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting May 7 00:07:24.656: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting May 7 00:07:24.656: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 7 00:07:24.656: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
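[Note] The memFraction values in the dumps that follow are the same requested/allocatable division applied to memory: for node1, 2025379840 / 178884608000 ≈ 0.0113223, and for node2, 568944640 / 178884603904 ≈ 0.0031805, matching the logged fractions to the printed precision.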
May 7 00:07:24.674: INFO: ComputeCPUMemFraction for node: node1 May 7 00:07:24.674: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:07:24.674: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:07:24.674: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:07:24.674: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:07:24.674: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:07:24.674: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:07:24.674: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:07:24.674: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:07:24.674: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:07:24.674: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:07:24.675: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:07:24.675: INFO: Node: node1, totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:07:24.675: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:07:24.675: INFO: ComputeCPUMemFraction for node: node2 May 7 00:07:24.675: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:07:24.675: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:07:24.675: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:07:24.675: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:07:24.675: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:07:24.675: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:07:24.675: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:07:24.675: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:07:24.675: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:07:24.675: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:07:24.675: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 May 7 00:07:32.778: INFO: ComputeCPUMemFraction for node: node2 May 7 00:07:32.778: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:07:32.778: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:07:32.778: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:07:32.778: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:07:32.778: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:07:32.778: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:07:32.778: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:07:32.778: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 7 00:07:32.778: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 May 7 00:07:32.778: INFO: ComputeCPUMemFraction for node: node1 May 7 00:07:32.778: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:07:32.778: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:07:32.778: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:07:32.778: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:07:32.778: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:07:32.778: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:07:32.778: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:07:32.778: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:07:32.778: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:07:32.778: INFO: Node: node1, 
totalRequestedCPUResource: 1087, cpuAllocatableMil: 77000, cpuFraction: 0.014116883116883116 May 7 00:07:32.778: INFO: Node: node1, totalRequestedMemResource: 2025379840, memAllocatableVal: 178884608000, memFraction: 0.011322270052435144 May 7 00:07:32.789: INFO: Waiting for running... May 7 00:07:32.793: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 7 00:07:37.861: INFO: ComputeCPUMemFraction for node: node2 May 7 00:07:37.861: INFO: Pod for on the node: cmk-cb5rv, Cpu: 200, Mem: 419430400 May 7 00:07:37.861: INFO: Pod for on the node: cmk-init-discover-node2-kt2nj, Cpu: 300, Mem: 629145600 May 7 00:07:37.861: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-vllpr, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: kube-flannel-ffwfn, Cpu: 150, Mem: 64000000 May 7 00:07:37.861: INFO: Pod for on the node: kube-multus-ds-amd64-gtzj9, Cpu: 100, Mem: 94371840 May 7 00:07:37.861: INFO: Pod for on the node: kube-proxy-g77fj, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-29wg6, Cpu: 50, Mem: 64000000 May 7 00:07:37.861: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-4ztpz, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 7 00:07:37.861: INFO: Pod for on the node: node-feature-discovery-worker-8phhs, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: collectd-mbz88, Cpu: 300, Mem: 629145600 May 7 00:07:37.861: INFO: Pod for on the node: node-exporter-4xqmj, Cpu: 112, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: 13c60354-b122-4eff-b291-165c32745f35-0, Cpu: 37963, Mem: 88885940224 May 7 00:07:37.861: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 7 00:07:37.861: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
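[Note] Same balancing arithmetic as before, but this test targets a 0.5 fraction: 0.5 × 77000 = 38500 millicores, so node2's filler pod (the 13c60354-... pod above) requests 38500 − 537 = 37963, and node1's (the 1fe8bae0-... pod below) requests 38500 − 1087 = 37413.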
May 7 00:07:37.861: INFO: ComputeCPUMemFraction for node: node1 May 7 00:07:37.861: INFO: Pod for on the node: cmk-init-discover-node1-tp69t, Cpu: 300, Mem: 629145600 May 7 00:07:37.861: INFO: Pod for on the node: cmk-trkp8, Cpu: 200, Mem: 419430400 May 7 00:07:37.861: INFO: Pod for on the node: kube-flannel-ph67x, Cpu: 150, Mem: 64000000 May 7 00:07:37.861: INFO: Pod for on the node: kube-multus-ds-amd64-2mv45, Cpu: 100, Mem: 94371840 May 7 00:07:37.861: INFO: Pod for on the node: kube-proxy-xc75d, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 7 00:07:37.861: INFO: Pod for on the node: node-feature-discovery-worker-fbf8d, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29, Cpu: 100, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: collectd-wq9cz, Cpu: 300, Mem: 629145600 May 7 00:07:37.861: INFO: Pod for on the node: node-exporter-hqs4s, Cpu: 112, Mem: 209715200 May 7 00:07:37.861: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 7 00:07:37.861: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrrfv, Cpu: 200, Mem: 314572800 May 7 00:07:37.861: INFO: Pod for on the node: 1fe8bae0-c615-472b-8eb5-068125405c7c-0, Cpu: 37413, Mem: 87429507072 May 7 00:07:37.861: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 7 00:07:37.861: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:07:57.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7069" for this suite. 
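[Note] The ReplicaSet-then-verify step above is the scoring side of topology spread: with 4 matching replicas on node2 and none on node1, a soft constraint makes node1 the higher-scoring placement for the test pod. A minimal sketch of such a constraint, assuming a hypothetical foo=bar selector (the e2e test's actual labels may differ):

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// A soft spread constraint over the dedicated per-test topology key:
    	// ScheduleAnyway means the scheduler scores rather than filters, so the
    	// node that evens out the distribution wins.
    	c := v1.TopologySpreadConstraint{
    		MaxSkew:           1,
    		TopologyKey:       "kubernetes.io/e2e-pts-score",
    		WhenUnsatisfiable: v1.ScheduleAnyway,
    		LabelSelector: &metav1.LabelSelector{
    			MatchLabels: map[string]string{"foo": "bar"}, // hypothetical selector
    		},
    	}
    	spec := v1.PodSpec{TopologySpreadConstraints: []v1.TopologySpreadConstraint{c}}
    	fmt.Printf("%+v\n", spec.TopologySpreadConstraints[0])
    }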
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:93.398 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":11,"skipped":4469,"failed":0} ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:07:57.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 May 7 00:07:57.989: INFO: Waiting up to 1m0s for all nodes to be ready May 7 00:08:58.043: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
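[Note] The "fake resource" is an extended resource the test patches into each node's status (10 units per node) so capacity can be saturated deterministically; the high/low/medium pods then differ only in their PriorityClass, and the medium pod's arrival forces preemption of low-priority victims. A rough sketch of the two moving parts; the resource name and class value here are illustrative, not the test's literals:

    package main

    import (
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	schedulingv1 "k8s.io/api/scheduling/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Hypothetical extended resource name; it only exists because the test
    	// writes it into node status.
    	fakeRes := v1.ResourceName("example.com/fake-pts-res")

    	low := schedulingv1.PriorityClass{
    		ObjectMeta: metav1.ObjectMeta{Name: "low"},
    		Value:      10, // a "high" class would simply carry a larger value
    	}
    	pod := v1.PodSpec{
    		PriorityClassName: low.Name,
    		Containers: []v1.Container{{
    			Name:  "pause",
    			Image: "k8s.gcr.io/pause:3.4.1",
    			Resources: v1.ResourceRequirements{
    				// Extended resources must set requests equal to limits.
    				Requests: v1.ResourceList{fakeRes: resource.MustParse("3")},
    				Limits:   v1.ResourceList{fakeRes: resource.MustParse("3")},
    			},
    		}},
    	}
    	fmt.Println(pod.PriorityClassName)
    }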
STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:09:34.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4192" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:96.406 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":12,"skipped":4995,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 7 00:09:34.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 7 00:09:34.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:09:34.400: INFO: Waiting for terminating namespaces to be deleted...
May 7 00:09:34.402: INFO: Logging pods the apiserver thinks is on node node1 before test May 7 00:09:34.412: INFO: cmk-init-discover-node1-tp69t from kube-system started at 2022-05-06 20:21:33 +0000 UTC (3 container statuses recorded) May 7 00:09:34.412: INFO: Container discover ready: false, restart count 0 May 7 00:09:34.412: INFO: Container init ready: false, restart count 0 May 7 00:09:34.412: INFO: Container install ready: false, restart count 0 May 7 00:09:34.412: INFO: cmk-trkp8 from kube-system started at 2022-05-06 20:22:16 +0000 UTC (2 container statuses recorded) May 7 00:09:34.412: INFO: Container nodereport ready: true, restart count 0 May 7 00:09:34.412: INFO: Container reconcile ready: true, restart count 0 May 7 00:09:34.412: INFO: kube-flannel-ph67x from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container kube-flannel ready: true, restart count 3 May 7 00:09:34.412: INFO: kube-multus-ds-amd64-2mv45 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container kube-multus ready: true, restart count 1 May 7 00:09:34.412: INFO: kube-proxy-xc75d from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:09:34.412: INFO: nginx-proxy-node1 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:09:34.412: INFO: node-feature-discovery-worker-fbf8d from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:09:34.412: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:09:34.412: INFO: collectd-wq9cz from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:09:34.412: INFO: Container collectd ready: true, restart count 0 May 7 00:09:34.412: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:09:34.412: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:09:34.412: INFO: node-exporter-hqs4s from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:09:34.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:09:34.412: INFO: Container node-exporter ready: true, restart count 0 May 7 00:09:34.412: INFO: prometheus-k8s-0 from monitoring started at 2022-05-06 20:23:29 +0000 UTC (4 container statuses recorded) May 7 00:09:34.412: INFO: Container config-reloader ready: true, restart count 0 May 7 00:09:34.412: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 7 00:09:34.412: INFO: Container grafana ready: true, restart count 0 May 7 00:09:34.412: INFO: Container prometheus ready: true, restart count 1 May 7 00:09:34.412: INFO: prometheus-operator-585ccfb458-vrrfv from monitoring started at 2022-05-06 20:23:12 +0000 UTC (2 container statuses recorded) May 7 00:09:34.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:09:34.412: INFO: Container prometheus-operator ready: true, restart count 0 May 7 00:09:34.412: INFO: high from sched-preemption-4192 started at 
2022-05-07 00:09:09 +0000 UTC (1 container statuses recorded) May 7 00:09:34.412: INFO: Container high ready: true, restart count 0 May 7 00:09:34.412: INFO: Logging pods the apiserver thinks is on node node2 before test May 7 00:09:34.419: INFO: cmk-cb5rv from kube-system started at 2022-05-06 20:22:17 +0000 UTC (2 container statuses recorded) May 7 00:09:34.419: INFO: Container nodereport ready: true, restart count 0 May 7 00:09:34.419: INFO: Container reconcile ready: true, restart count 0 May 7 00:09:34.419: INFO: cmk-init-discover-node2-kt2nj from kube-system started at 2022-05-06 20:21:53 +0000 UTC (3 container statuses recorded) May 7 00:09:34.419: INFO: Container discover ready: false, restart count 0 May 7 00:09:34.419: INFO: Container init ready: false, restart count 0 May 7 00:09:34.419: INFO: Container install ready: false, restart count 0 May 7 00:09:34.419: INFO: cmk-webhook-6c9d5f8578-vllpr from kube-system started at 2022-05-06 20:22:17 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container cmk-webhook ready: true, restart count 0 May 7 00:09:34.419: INFO: kube-flannel-ffwfn from kube-system started at 2022-05-06 20:10:16 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container kube-flannel ready: true, restart count 2 May 7 00:09:34.419: INFO: kube-multus-ds-amd64-gtzj9 from kube-system started at 2022-05-06 20:10:25 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container kube-multus ready: true, restart count 1 May 7 00:09:34.419: INFO: kube-proxy-g77fj from kube-system started at 2022-05-06 20:09:20 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container kube-proxy ready: true, restart count 2 May 7 00:09:34.419: INFO: kubernetes-dashboard-785dcbb76d-29wg6 from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 7 00:09:34.419: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz from kube-system started at 2022-05-06 20:10:56 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 7 00:09:34.419: INFO: nginx-proxy-node2 from kube-system started at 2022-05-06 20:09:17 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container nginx-proxy ready: true, restart count 2 May 7 00:09:34.419: INFO: node-feature-discovery-worker-8phhs from kube-system started at 2022-05-06 20:17:54 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container nfd-worker ready: true, restart count 0 May 7 00:09:34.419: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h from kube-system started at 2022-05-06 20:19:12 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container kube-sriovdp ready: true, restart count 0 May 7 00:09:34.419: INFO: collectd-mbz88 from monitoring started at 2022-05-06 20:27:12 +0000 UTC (3 container statuses recorded) May 7 00:09:34.419: INFO: Container collectd ready: true, restart count 0 May 7 00:09:34.419: INFO: Container collectd-exporter ready: true, restart count 0 May 7 00:09:34.419: INFO: Container rbac-proxy ready: true, restart count 0 May 7 00:09:34.419: INFO: node-exporter-4xqmj from monitoring started at 2022-05-06 20:23:20 +0000 UTC (2 container statuses recorded) May 7 00:09:34.419: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 7 00:09:34.419: INFO: Container node-exporter ready: true, restart 
count 0 May 7 00:09:34.419: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 from monitoring started at 2022-05-06 20:26:21 +0000 UTC (1 container statuses recorded) May 7 00:09:34.419: INFO: Container tas-extender ready: true, restart count 0 May 7 00:09:34.419: INFO: low-1 from sched-preemption-4192 started at 2022-05-07 00:09:15 +0000 UTC (1 container statuses recorded) May 7 00:09:34.420: INFO: Container low-1 ready: true, restart count 0 May 7 00:09:34.420: INFO: medium from sched-preemption-4192 started at 2022-05-07 00:09:29 +0000 UTC (1 container statuses recorded) May 7 00:09:34.420: INFO: Container medium ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 7 00:09:34.459: INFO: Pod cmk-cb5rv requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod cmk-trkp8 requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod cmk-webhook-6c9d5f8578-vllpr requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod kube-flannel-ffwfn requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod kube-flannel-ph67x requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod kube-multus-ds-amd64-2mv45 requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod kube-multus-ds-amd64-gtzj9 requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod kube-proxy-g77fj requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod kube-proxy-xc75d requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod kubernetes-dashboard-785dcbb76d-29wg6 requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod kubernetes-metrics-scraper-5558854cb-4ztpz requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod node-feature-discovery-worker-8phhs requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod node-feature-discovery-worker-fbf8d requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod collectd-mbz88 requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod collectd-wq9cz requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod node-exporter-4xqmj requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod node-exporter-hqs4s requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod prometheus-operator-585ccfb458-vrrfv requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: 
Pod high requesting local ephemeral resource =0 on Node node1 May 7 00:09:34.459: INFO: Pod low-1 requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Pod medium requesting local ephemeral resource =0 on Node node2 May 7 00:09:34.459: INFO: Using pod capacity: 40608090249 May 7 00:09:34.459: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 May 7 00:09:34.459: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 7 00:09:34.652: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ecaa15ec193cb1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ecaa17792a4021], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ecaa17a04a6a4d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 656.410248ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ecaa17c8f7a41e], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16ecaa181df08c8a], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ecaa15ec9b56cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ecaa16b5eb7261], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ecaa16c951e96f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 325.474458ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ecaa16f506facf], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16ecaa171f06336b], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ecaa15f1a8c22d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ecaa17fa177792], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ecaa180d9cb729], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 327.491768ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ecaa1831dae1dd], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16ecaa18782bab03], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ecaa15f2497814], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ecaa187836871d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ecaa18a5c39ea3], Reason = [Pulled], Message = [Successfully pulled image 
"k8s.gcr.io/pause:3.4.1" in 764.195658ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ecaa18ac5f406f], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16ecaa18b39d158d], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ecaa15f2d862a8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ecaa18498f704d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ecaa187167be96], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 668.480884ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ecaa18787f8ace], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16ecaa18855f4c22], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ecaa15f36d8a53], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ecaa17be975e93], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ecaa17e1dbde4e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 591.683246ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ecaa182162363e], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16ecaa1851601383], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ecaa15f405711e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ecaa182f4f0b25], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ecaa18442abb7c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 349.933301ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ecaa18769e2591], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16ecaa188d6e752d], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ecaa15f48dd96b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-15 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ecaa1875606908], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ecaa189305ecee], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 497.380313ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ecaa189a16d61b], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16ecaa18a146c976], Reason = [Started], Message = [Started container 
overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ecaa15f53439cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ecaa18495d0348], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ecaa185fb7efb7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 375.050452ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ecaa186757027d], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16ecaa186e6fd566], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ecaa15f5c3bb96], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ecaa181dbd3493], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ecaa183d6d2d7d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 531.618902ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ecaa18533c5341], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16ecaa185e9e73b2], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ecaa15f64959a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ecaa18191a90d4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ecaa182b836e03], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 308.854804ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ecaa184fb9d8ee], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16ecaa185e0fc649], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ecaa15f6e7bf1f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ecaa1850f3167c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ecaa1889a22632], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 950.989952ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ecaa18914044dc], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16ecaa18994019f6], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ecaa15ed40bf0d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ecaa17da7339b6], Reason = [Pulling], Message = [Pulling image 
"k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ecaa17ec717890], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 301.866796ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ecaa1805748de4], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16ecaa1851248aaa], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ecaa15edcd0928], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-3 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ecaa17271986f0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ecaa1739f77074], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 316.525099ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ecaa1757925c57], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16ecaa17e1e69b3c], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ecaa15ee560663], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ecaa18262d8716], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ecaa184f32d161], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 688.204598ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ecaa18593be8da], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16ecaa1861331fad], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ecaa15eed4880d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-5 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ecaa1676b28a5d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ecaa16875ed4c0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 279.717849ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ecaa16abf66de9], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16ecaa17119153f1], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ecaa15ef5fd05f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ecaa16750320e0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ecaa168e4fd1c9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 424.446915ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16ecaa16aa49605c], Reason = [Created], Message = [Created container overcommit-6] STEP: 
Considering event: Type = [Normal], Name = [overcommit-6.16ecaa17a88ebc68], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ecaa15eff417ee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ecaa179756c0e5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ecaa17b2393b1f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 451.044637ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ecaa17cf51e6dd], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16ecaa181df1b468], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ecaa15f08fdb21], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ecaa1778773949], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ecaa178eb99f61], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 373.440874ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ecaa17a9c97b3c], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16ecaa17d70872b9], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ecaa15f1056cf9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1380/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ecaa1855fbb3dd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ecaa1881957ec2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 731.493002ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ecaa188c298375], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16ecaa1892b33487], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16ecaa197946260c], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 7 00:09:50.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1380" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.386 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":13,"skipped":5269,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 7 00:09:50.760: INFO: Running AfterSuite actions on all nodes
May 7 00:09:50.760: INFO: Running AfterSuite actions on node 1
May 7 00:09:50.760: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0}
Ran 13 of 5773 Specs in 524.040 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped
PASS
Ginkgo ran 1 suite in 8m45.38477753s
Test Suite Passed
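------------------------------
The FailedScheduling event above is the crux of the spec: once the overcommit pods consumed the workers' allocatable ephemeral storage, the scheduler rejected additional-pod with "Insufficient ephemeral-storage". For readers reproducing this by hand, the following is a minimal client-go sketch of creating one such ephemeral-storage-requesting pause pod. It is illustrative only, not the suite's implementation in predicates.go; the pod name (overcommit-demo), the default namespace, and the 1Gi quantity are assumptions chosen for the example.

// Sketch: create a pause pod that requests local ephemeral storage,
// the resource this spec exhausts to provoke FailedScheduling.
package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite logged.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // the image pulled throughout this spec
				Resources: v1.ResourceRequirements{
					// If the summed requests across pending pods exceed a node's
					// allocatable ephemeral storage, the scheduler reports
					// "Insufficient ephemeral-storage", as in the event above.
					Requests: v1.ResourceList{
						v1.ResourceEphemeralStorage: resource.MustParse("1Gi"), // illustrative quantity
					},
					Limits: v1.ResourceList{
						v1.ResourceEphemeralStorage: resource.MustParse("1Gi"),
					},
				},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod:", created.Name)
}
------------------------------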