I0617 23:51:20.866497 25 e2e.go:129] Starting e2e run "1da3eeea-4f82-4628-b23e-faf2f6aa5863" on Ginkgo node 1 {"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1655509879 - Will randomize all specs Will run 13 of 5773 specs Jun 17 23:51:20.882: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:51:20.886: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Jun 17 23:51:20.915: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 17 23:51:20.981: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting Jun 17 23:51:20.981: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting Jun 17 23:51:20.981: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 17 23:51:20.981: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Jun 17 23:51:20.981: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jun 17 23:51:20.998: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Jun 17 23:51:20.998: INFO: e2e test version: v1.21.9 Jun 17 23:51:20.999: INFO: kube-apiserver version: v1.21.1 Jun 17 23:51:20.999: INFO: >>> kubeConfig: /root/.kube/config Jun 17 23:51:21.006: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:51:21.012: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W0617 23:51:21.040700 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jun 17 23:51:21.040: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Jun 17 23:51:21.044: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:51:21.047: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:51:21.060: INFO: Waiting for terminating namespaces to be deleted... Jun 17 23:51:21.063: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:51:21.073: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:51:21.073: INFO: Container discover ready: false, restart count 0 Jun 17 23:51:21.073: INFO: Container init ready: false, restart count 0 Jun 17 23:51:21.073: INFO: Container install ready: false, restart count 0 Jun 17 23:51:21.073: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.073: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:51:21.073: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:51:21.073: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:51:21.073: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:51:21.073: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.073: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:51:21.073: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.073: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:51:21.073: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.073: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:51:21.073: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.074: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:51:21.074: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.074: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:51:21.074: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.074: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:51:21.074: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.074: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:51:21.074: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses 
recorded) Jun 17 23:51:21.074: INFO: Container collectd ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:51:21.074: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:51:21.074: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:51:21.074: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:51:21.074: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Container grafana ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:51:21.074: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.074: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:51:21.074: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:51:21.080: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:51:21.080: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:51:21.080: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:51:21.080: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:51:21.080: INFO: Container discover ready: false, restart count 0 Jun 17 23:51:21.080: INFO: Container init ready: false, restart count 0 Jun 17 23:51:21.080: INFO: Container install ready: false, restart count 0 Jun 17 23:51:21.080: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:51:21.080: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:51:21.080: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:51:21.080: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:51:21.080: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:51:21.080: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:51:21.080: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:51:21.080: INFO: Container kube-sriovdp ready: 
true, restart count 0 Jun 17 23:51:21.080: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:51:21.080: INFO: Container collectd ready: true, restart count 0 Jun 17 23:51:21.080: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:51:21.080: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:51:21.080: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:51:21.080: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:51:21.080: INFO: Container node-exporter ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1601dad9-9fbb-4df7-9b5e-31583bf0fe19=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-cc49f793-a384-4bc3-b5a1-5b3e8d6dd826 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-cc49f793-a384-4bc3-b5a1-5b3e8d6dd826 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-cc49f793-a384-4bc3-b5a1-5b3e8d6dd826 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1601dad9-9fbb-4df7-9b5e-31583bf0fe19=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:51:29.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1779" for this suite. 
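
The taints-tolerations spec above launched a pod to find a schedulable node, applied a random NoSchedule taint to it, and then relaunched the pod with a matching toleration. For reference, the matching rule it relies on can be sketched with the k8s.io/api/core/v1 types roughly as below; the key and value literals are placeholders rather than the randomly generated ones from this run, and the helper is a simplified restatement of the rule, not the scheduler's actual code.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// toleratesTaint is a simplified restatement of the NoSchedule matching rule:
// the toleration must match the taint's key and effect, and either use the
// Exists operator or carry an equal value.
func toleratesTaint(tol v1.Toleration, taint v1.Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key != "" && tol.Key != taint.Key {
		return false
	}
	switch tol.Operator {
	case v1.TolerationOpExists:
		return true
	case v1.TolerationOpEqual, "":
		return tol.Value == taint.Value
	}
	return false
}

func main() {
	// Placeholder key/value; the e2e run above generates random UUID-based ones.
	taint := v1.Taint{
		Key:    "example.com/e2e-taint-key",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}
	tol := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}
	// true: the relaunched pod is allowed onto the tainted node.
	fmt.Println(toleratesTaint(tol, taint))
}
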
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.179 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":1,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:51:29.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:51:29.219: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:51:29.228: INFO: Waiting for terminating namespaces to be deleted... 
Jun 17 23:51:29.230: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:51:29.239: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:51:29.239: INFO: Container discover ready: false, restart count 0 Jun 17 23:51:29.239: INFO: Container init ready: false, restart count 0 Jun 17 23:51:29.239: INFO: Container install ready: false, restart count 0 Jun 17 23:51:29.239: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.239: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:51:29.239: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:51:29.240: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:51:29.240: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:51:29.240: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:51:29.240: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:51:29.240: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:51:29.240: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:51:29.240: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:51:29.240: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:51:29.240: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:51:29.240: INFO: Container collectd ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:51:29.240: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:51:29.240: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:51:29.240: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:51:29.240: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container 
grafana ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:51:29.240: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.240: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:51:29.240: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:51:29.248: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:51:29.248: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:51:29.248: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:51:29.248: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:51:29.248: INFO: Container discover ready: false, restart count 0 Jun 17 23:51:29.248: INFO: Container init ready: false, restart count 0 Jun 17 23:51:29.248: INFO: Container install ready: false, restart count 0 Jun 17 23:51:29.248: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.248: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:51:29.248: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:51:29.249: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:51:29.249: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:51:29.249: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:51:29.249: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:51:29.249: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:51:29.249: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:51:29.249: INFO: Container collectd ready: true, restart count 0 Jun 17 23:51:29.249: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:51:29.249: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:51:29.249: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:51:29.249: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:51:29.249: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:51:29.249: INFO: with-tolerations from sched-pred-1779 started at 2022-06-17 23:51:25 +0000 UTC (1 container statuses recorded) Jun 17 23:51:29.249: INFO: Container with-tolerations ready: 
true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7a092a00e0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7a511f078b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7ae6fb1ff3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8871/filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7b3c6ff65f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7b4f053b82], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 311.760733ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7b54f95ec9], Reason = [Created], Message = [Created container filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99] STEP: Considering event: Type = [Normal], Name = [filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99.16f98d7b5b5a6edb], Reason = [Started], Message = [Started container filler-pod-95761e1f-6a01-44de-817d-20fde4b98f99] STEP: Considering event: Type = [Normal], Name = [without-label.16f98d7918b84d17], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8871/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16f98d796c6c4a2c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16f98d798555471a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 417.915242ms] STEP: Considering event: Type = [Normal], Name = [without-label.16f98d798d28840d], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16f98d79940f4166], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16f98d7a082e03e2], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-podc758dd2e-4f30-4c67-8904-6f7471705eb0.16f98d7be7725b9d], Reason = [FailedScheduling], Message = [0/5 nodes are 
available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:51:42.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8871" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:13.171 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":2,"skipped":915,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:51:42.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 17 23:51:42.397: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 23:52:42.453: INFO: Waiting for terminating namespaces to be deleted... 
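
The pod-overhead spec above rejected the extra pod with "Insufficient example.com/beardsecond" because the scheduler charges a pod's RuntimeClass overhead on top of its container requests. A rough sketch of that accounting follows; only the example.com/beardsecond resource name comes from the run, the quantities and helper name are made up for illustration.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// effectiveRequest sketches what the scheduler charges against a node for one
// resource: the sum of the container requests plus the RuntimeClass overhead.
// The pod fits only if that total is within the node's remaining allocatable.
func effectiveRequest(pod v1.Pod, res v1.ResourceName) resource.Quantity {
	total := *resource.NewQuantity(0, resource.DecimalSI)
	for _, c := range pod.Spec.Containers {
		if q, ok := c.Resources.Requests[res]; ok {
			total.Add(q)
		}
	}
	if pod.Spec.Overhead != nil {
		if q, ok := pod.Spec.Overhead[res]; ok {
			total.Add(q)
		}
	}
	return total
}

func main() {
	// "example.com/beardsecond" is the fake extended resource the spec patches
	// onto the nodes; the amounts here are illustrative.
	beard := v1.ResourceName("example.com/beardsecond")
	pod := v1.Pod{Spec: v1.PodSpec{
		Containers: []v1.Container{{
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{beard: resource.MustParse("500")},
			},
		}},
		// Overhead is copied from the RuntimeClass onto the pod by the
		// RuntimeClass admission controller before scheduling.
		Overhead: v1.ResourceList{beard: resource.MustParse("250")},
	}}
	q := effectiveRequest(pod, beard)
	fmt.Printf("scheduler charges %s of %s for this pod\n", q.String(), beard)
}
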
Jun 17 23:52:42.456: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 17 23:52:42.475: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting Jun 17 23:52:42.475: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting Jun 17 23:52:42.475: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 17 23:52:42.475: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Jun 17 23:52:42.490: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:52:42.490: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:52:42.490: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:52:42.490: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:52:42.490: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:52:42.490: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:52:42.490: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:52:42.490: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:52:42.490: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:52:42.490: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:52:42.490: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:52:42.490: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:52:42.490: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:52:42.490: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:52:42.490: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:52:42.490: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:52:42.490: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:52:42.490: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:52:42.490: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:52:42.490: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:52:42.491: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 
77000, cpuFraction: 0.006324675324675325 Jun 17 23:52:42.491: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Jun 17 23:52:52.607: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:52:52.607: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:52:52.607: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:52:52.607: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:52:52.607: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:52:52.607: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:52:52.607: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:52:52.607: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:52:52.607: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:52:52.607: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 Jun 17 23:52:52.607: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:52:52.607: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:52:52.607: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:52:52.607: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:52:52.607: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:52:52.607: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:52:52.607: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:52:52.607: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the 
node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:52:52.607: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:52:52.607: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:52:52.607: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:52:52.607: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:52:52.607: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:52:52.619: INFO: Waiting for running... Jun 17 23:52:52.622: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 17 23:53:02.690: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:53:02.690: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:53:02.690: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:53:02.690: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:53:02.690: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:53:02.690: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:53:02.690: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:53:02.690: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: fbe13727-33ce-43c0-b833-15228bbadbfb-0, Cpu: 38013, Mem: 88949940224 Jun 17 23:53:02.690: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 17 23:53:02.690: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 17 23:53:02.690: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:53:02.690: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:53:02.690: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:53:02.690: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:53:02.690: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:53:02.690: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:53:02.690: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:53:02.690: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:53:02.690: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:53:02.690: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:53:02.690: INFO: Pod for on the node: d108e9bf-e87d-4c7d-a6dc-59dc674c0bc8-0, Cpu: 37563, Mem: 87680079872 Jun 17 23:53:02.690: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 17 23:53:02.690: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:53:20.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2759" for this suite. 
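
The PodTopologySpread scoring steps above ran a 4-replica ReplicaSet on node2 and verified the test pod preferred node1: with a soft (ScheduleAnyway) spread constraint keyed on the dedicated kubernetes.io/e2e-pts-score label, the less-loaded topology domain scores higher. A minimal sketch of such a constraint, with a placeholder label selector and image:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "kubernetes.io/e2e-pts-score" is the dedicated topology key the spec
	// applies to the two nodes; the selector labels are placeholders.
	constraint := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score",
		WhenUnsatisfiable: v1.ScheduleAnyway, // soft: influences scoring, not filtering
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "pts-score-demo"},
		},
	}
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"app": "pts-score-demo"},
		},
		Spec: v1.PodSpec{
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{constraint},
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
	// With 4 matching replicas already on node2 and none on node1, the
	// spreading score favors node1, which is what the spec verifies.
	fmt.Println(pod.Name, "prefers the less-loaded topology domain")
}
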
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:98.415 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":3,"skipped":1210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:53:20.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 17 23:53:20.816: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 23:54:20.867: INFO: Waiting for terminating namespaces to be deleted... Jun 17 23:54:20.870: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 17 23:54:20.891: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting Jun 17 23:54:20.891: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting Jun 17 23:54:20.891: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 17 23:54:20.891: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
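
The ComputeCPUMemFraction dumps above, and the ones that follow for this spec, reduce to requested/allocatable per node, and the "balanced" filler pods are sized to lift every node to the same target fraction (0.5 in this run). A plain-Go sketch of the CPU sizing using node1's logged figures; the helper name is illustrative, not the framework's, and memory follows the same idea up to the framework's rounding (its fraction lands a hair above the target).

package main

import "fmt"

// fillerMilliCPU sizes a filler pod so the node's requested/allocatable ratio
// reaches the target fraction.
func fillerMilliCPU(requested, allocatable int64, target float64) int64 {
	want := int64(target * float64(allocatable))
	if want <= requested {
		return 0 // node is already at or past the target
	}
	return want - requested
}

func main() {
	// Figures logged for node1 above: 937m requested of 77000m allocatable.
	requested, allocatable := int64(937), int64(77000)
	fmt.Println("fraction before:", float64(requested)/float64(allocatable)) // ~0.0122
	filler := fillerMilliCPU(requested, allocatable, 0.5)
	fmt.Println("filler pod CPU (m):", filler) // 37563, matching the log
	fmt.Println("fraction after:", float64(requested+filler)/float64(allocatable)) // 0.5
}
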
Jun 17 23:54:20.907: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:54:20.907: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.907: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:54:20.907: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:54:20.907: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:54:20.907: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:54:20.907: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:54:20.907: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.907: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:54:20.907: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:54:20.907: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:54:20.907: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:54:20.907: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:54:20.907: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.907: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:54:20.907: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:54:20.907: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:54:20.907: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.907: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.907: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:54:20.907: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:54:20.907: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Jun 17 23:54:20.924: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:54:20.924: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.924: 
INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:54:20.924: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:54:20.924: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:54:20.924: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:54:20.924: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:54:20.924: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.924: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:54:20.924: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:54:20.924: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:54:20.924: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:54:20.924: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:54:20.924: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.924: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:54:20.924: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:54:20.924: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:54:20.924: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:54:20.924: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:54:20.925: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:54:20.925: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:54:20.925: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 Jun 17 23:54:20.940: INFO: Waiting for running... Jun 17 23:54:20.941: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 17 23:54:26.010: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:54:26.010: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:54:26.010: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:54:26.010: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:54:26.010: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:54:26.010: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:54:26.010: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:54:26.010: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:54:26.010: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:54:26.010: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: 75970917-f331-4b57-800e-2806872972fc-0, Cpu: 37563, Mem: 87680079872 Jun 17 23:54:26.010: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 17 23:54:26.010: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 17 23:54:26.010: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:54:26.010: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:54:26.010: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:54:26.010: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:54:26.010: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:54:26.010: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:54:26.010: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:54:26.010: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:54:26.010: INFO: Pod for on the node: 54bbb6f7-cc66-45de-94e2-77d8001334af-0, Cpu: 38013, Mem: 88949940224 Jun 17 23:54:26.010: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 17 23:54:26.011: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Trying to apply 10 (tolerable) taints on the first node. 
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-463a6893-6dde-4bba-92cc=testing-taint-value-6653c63a-fc8d-425c-855c-3e963493993f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f2bb316d-bbbd-40ba-a967=testing-taint-value-9125e5ad-9b04-4587-896a-2f45cf09bef1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-95655d9a-306d-49c9-8ed3=testing-taint-value-8a8040d1-684e-4c8d-b3bb-15a500bcea8e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4131b4d6-1f54-4023-9dd0=testing-taint-value-54e849d2-2536-4d6e-8dd3-c6725efb0da9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2e3709d3-a1f3-4d3f-a015=testing-taint-value-bdb697e4-612f-46d9-8911-95cff440de20:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fd71a713-6f3b-4ec5-a68b=testing-taint-value-eaa30d1c-5c52-4915-a0c8-db26715dd665:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a935d17a-ab7c-49ec-b980=testing-taint-value-c4c406c2-2b03-4f26-9b54-e3fe9c590d9c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e028352f-8477-4546-85e5=testing-taint-value-ad17b3bd-e246-499a-9c7b-dd2603e27143:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b5da2957-062e-4ab4-bb37=testing-taint-value-1bc65106-a60d-49f6-84b6-ca17bd10ae00:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9cc2e580-6525-4428-a7f6=testing-taint-value-74390b31-6413-4816-a50a-fa115eed956e:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8bb187df-1b1e-44f5-b817=testing-taint-value-c28371ff-f1ae-41a0-acf3-e1391d900b87:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c33a054c-3a7d-45fb-8800=testing-taint-value-5f02e71a-7aa2-4438-a562-74d247e07fc9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-197d2073-9b66-4e8c-9994=testing-taint-value-dcbdee96-2baf-412b-8101-bbf71e9783cd:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-6bca8247-8277-493c-b1f6=testing-taint-value-a98ac752-dc2c-4c68-aaea-8d7e8cdbf23e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a5a5d458-6740-4baf-a445=testing-taint-value-2ea9c15d-d100-4b78-b9a3-ce297db14a6d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b9d60ca1-a31f-4e71-a1f9=testing-taint-value-1485e9f6-a56d-4fb4-9ee9-532dfa7e0fce:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a0a94eb3-ba95-4242-9bec=testing-taint-value-1cf8af12-3d4f-4d00-afec-be5c054a3734:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7cdef8d7-0748-4eb1-886b=testing-taint-value-dcc9d044-b0c6-421c-ae79-8e867873c959:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-42f28d80-7531-4785-98c5=testing-taint-value-2e54e99f-d050-43c4-ba9e-e787d0bffd6a:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-61c1773f-98d8-4c7f-a40e=testing-taint-value-1a5cef2e-a10d-4a63-8c71-773fcf540257:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8bb187df-1b1e-44f5-b817=testing-taint-value-c28371ff-f1ae-41a0-acf3-e1391d900b87:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c33a054c-3a7d-45fb-8800=testing-taint-value-5f02e71a-7aa2-4438-a562-74d247e07fc9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-197d2073-9b66-4e8c-9994=testing-taint-value-dcbdee96-2baf-412b-8101-bbf71e9783cd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-6bca8247-8277-493c-b1f6=testing-taint-value-a98ac752-dc2c-4c68-aaea-8d7e8cdbf23e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a5a5d458-6740-4baf-a445=testing-taint-value-2ea9c15d-d100-4b78-b9a3-ce297db14a6d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b9d60ca1-a31f-4e71-a1f9=testing-taint-value-1485e9f6-a56d-4fb4-9ee9-532dfa7e0fce:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a0a94eb3-ba95-4242-9bec=testing-taint-value-1cf8af12-3d4f-4d00-afec-be5c054a3734:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7cdef8d7-0748-4eb1-886b=testing-taint-value-dcc9d044-b0c6-421c-ae79-8e867873c959:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-42f28d80-7531-4785-98c5=testing-taint-value-2e54e99f-d050-43c4-ba9e-e787d0bffd6a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-61c1773f-98d8-4c7f-a40e=testing-taint-value-1a5cef2e-a10d-4a63-8c71-773fcf540257:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-463a6893-6dde-4bba-92cc=testing-taint-value-6653c63a-fc8d-425c-855c-3e963493993f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f2bb316d-bbbd-40ba-a967=testing-taint-value-9125e5ad-9b04-4587-896a-2f45cf09bef1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-95655d9a-306d-49c9-8ed3=testing-taint-value-8a8040d1-684e-4c8d-b3bb-15a500bcea8e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4131b4d6-1f54-4023-9dd0=testing-taint-value-54e849d2-2536-4d6e-8dd3-c6725efb0da9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2e3709d3-a1f3-4d3f-a015=testing-taint-value-bdb697e4-612f-46d9-8911-95cff440de20:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fd71a713-6f3b-4ec5-a68b=testing-taint-value-eaa30d1c-5c52-4915-a0c8-db26715dd665:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a935d17a-ab7c-49ec-b980=testing-taint-value-c4c406c2-2b03-4f26-9b54-e3fe9c590d9c:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-e028352f-8477-4546-85e5=testing-taint-value-ad17b3bd-e246-499a-9c7b-dd2603e27143:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b5da2957-062e-4ab4-bb37=testing-taint-value-1bc65106-a60d-49f6-84b6-ca17bd10ae00:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9cc2e580-6525-4428-a7f6=testing-taint-value-74390b31-6413-4816-a50a-fa115eed956e:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:54:39.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-6544" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:78.574 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":4,"skipped":1404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:54:39.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:54:39.393: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:54:39.402: INFO: Waiting for terminating namespaces to be deleted... 
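The test that just passed exercises taint tolerations as a scheduling preference rather than a hard filter: ten tolerable PreferNoSchedule taints go onto the first node, ten intolerable ones onto every other node, and a pod tolerating only the first node's taints is expected to land there. A minimal sketch of one such taint and its matching toleration, using the k8s.io/api types and an illustrative key and value instead of the generated UUID names in the log:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One of the PreferNoSchedule taints applied to the first node
	// (illustrative key and value; the test generates UUID-based names).
	taint := v1.Taint{
		Key:    "example.com/e2e-scheduling-priorities",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// A toleration that matches that taint exactly.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectPreferNoSchedule,
	}

	// A pod carrying the toleration: it tolerates the first node's taints
	// but not the taints on the other nodes, so the scheduler prefers the
	// first node, which is what the test asserts.
	pod := v1.Pod{
		Spec: v1.PodSpec{
			Tolerations: []v1.Toleration{toleration},
			Containers: []v1.Container{
				{Name: "with-tolerations", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}
	fmt.Println(len(pod.Spec.Tolerations))
}
```

Because the effect is PreferNoSchedule, an untolerated taint only lowers a node's score instead of filtering the node out, which is why this check lives under SchedulerPriorities rather than SchedulerPredicates.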
Jun 17 23:54:39.405: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:54:39.421: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:54:39.421: INFO: Container discover ready: false, restart count 0 Jun 17 23:54:39.421: INFO: Container init ready: false, restart count 0 Jun 17 23:54:39.421: INFO: Container install ready: false, restart count 0 Jun 17 23:54:39.421: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.421: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:54:39.421: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:54:39.421: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:54:39.421: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:54:39.421: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.421: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:54:39.421: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.421: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:54:39.421: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.421: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:54:39.421: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.421: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:54:39.421: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.421: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:54:39.421: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.422: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:54:39.422: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.422: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:54:39.422: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:54:39.422: INFO: Container collectd ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:54:39.422: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:54:39.422: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:54:39.422: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:54:39.422: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Container 
grafana ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:54:39.422: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.422: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:54:39.422: INFO: with-tolerations from sched-priority-6544 started at 2022-06-17 23:54:26 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.422: INFO: Container with-tolerations ready: true, restart count 0 Jun 17 23:54:39.422: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:54:39.435: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:54:39.435: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:54:39.435: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:54:39.435: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:54:39.435: INFO: Container discover ready: false, restart count 0 Jun 17 23:54:39.435: INFO: Container init ready: false, restart count 0 Jun 17 23:54:39.435: INFO: Container install ready: false, restart count 0 Jun 17 23:54:39.435: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:54:39.435: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:54:39.435: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:54:39.435: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:54:39.435: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:54:39.435: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:54:39.435: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:54:39.435: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:54:39.435: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:54:39.435: INFO: Container collectd ready: true, restart count 0 Jun 17 23:54:39.436: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:54:39.436: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:54:39.436: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:54:39.436: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:54:39.436: INFO: Container node-exporter ready: 
true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Jun 17 23:54:39.472: INFO: Pod cmk-5gtjq requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod cmk-webhook-6c9d5f8578-qcmrd requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod cmk-xh247 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod kube-flannel-plbl8 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod kube-flannel-wqcwq requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod kube-multus-ds-amd64-hblk4 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod kube-multus-ds-amd64-m6vf8 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod kube-proxy-pvtj6 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod kube-proxy-t4lqk requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod kubernetes-dashboard-785dcbb76d-26kg6 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod kubernetes-metrics-scraper-5558854cb-w4nk8 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod node-feature-discovery-worker-82r46 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod node-feature-discovery-worker-dgp4b requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod collectd-5src2 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod collectd-6bcqz requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod node-exporter-8ftgl requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod node-exporter-xgz6d requesting local ephemeral resource =0 on Node node2 Jun 17 23:54:39.472: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-tbvjv requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node node1 Jun 17 23:54:39.472: INFO: Using pod capacity: 40608090249 Jun 17 23:54:39.472: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 Jun 17 23:54:39.472: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Jun 17 23:54:39.656: INFO: Waiting for running... 
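The ephemeral storage test above derives a per-pod request by dividing each node's allocatable local ephemeral storage (406080902496 bytes) by ten, starts 20 such pods to saturate both schedulable nodes, and then expects one extra pod to stay unschedulable with an Insufficient ephemeral-storage message. A sketch of a pod requesting that fixed share of local ephemeral storage, using the k8s.io/api and apimachinery types and the quantity from this run (any other value is handled the same way by the scheduler):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Per-pod share used in this run: the node's allocatable local
	// ephemeral storage divided by ten.
	perPod := resource.NewQuantity(40608090249, resource.BinarySI)

	pod := v1.Pod{
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "overcommit",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					// ephemeral-storage requests are what the scheduler sums
					// against the node's allocatable value; once both nodes are
					// full, further pods fail with Insufficient ephemeral-storage.
					Requests: v1.ResourceList{
						v1.ResourceEphemeralStorage: *perPod,
					},
					Limits: v1.ResourceList{
						v1.ResourceEphemeralStorage: *perPod,
					},
				},
			}},
		},
	}

	q := pod.Spec.Containers[0].Resources.Requests[v1.ResourceEphemeralStorage]
	fmt.Println(q.String())
}
```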
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f98da562308f84], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f98da5de7153a6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f98da5f8794f05], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 436.722509ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f98da6116c7c63], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f98da669aa8b65], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f98da562974577], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f98da646521668], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f98da669d6c516], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 595.890343ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f98da68e6ae2f7], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f98da6b5de9d52], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f98da5673c1aae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f98da7762bbcc0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f98da7b796afb8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.097523143s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f98da7bdf2a507], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f98da7c49d444d], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f98da567c65da8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f98da773e1644b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f98da785a2c3d0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 297.877732ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f98da78ca7c9a3], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f98da79475cb88], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f98da56852c2a9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-12 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f98da775a7cf8c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: 
Considering event: Type = [Normal], Name = [overcommit-12.16f98da7961a63ee], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 544.373847ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f98da79d3a3c0e], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f98da7a38deccc], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f98da568dfc1f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f98da7956cfebc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f98da7ee55b9a9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.491640417s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f98da7f8560e60], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f98da7ff7e11f6], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f98da5696b95bb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f98da794d85c39], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f98da7c9752a16], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 882.685659ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f98da7ee4f0fe9], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f98da7f8165e0f], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f98da569f1cffa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f98da7ded3e377], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f98da7ff009f41], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 539.797614ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f98da805caee6d], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f98da80d492f6d], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f98da56a7220da], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f98da7951e388e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f98da7d99bcf71], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.149076297s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f98da7f10bba82], Reason = [Created], Message = [Created container overcommit-16] 
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f98da7fabc84e8], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f98da56afd9dae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f98da7e34ca011], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f98da81098713d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 759.937929ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f98da818a4106f], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f98da81f76326d], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f98da56bad64fb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f98da701725d04], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f98da715360950], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 331.583271ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f98da746ce0396], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f98da7997687e4], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f98da56c4d876b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f98da78e846f04], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f98da7a0e67966], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 308.406921ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f98da7c915e8fb], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f98da7eba860ce], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f98da5630fcf17], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f98da78e846f2e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f98da7b77713a6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 686.975461ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f98da7e65626c9], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f98da7fa4baf00], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f98da563a3a319], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-3 to node1] STEP: 
Considering event: Type = [Normal], Name = [overcommit-3.16f98da5f201dfb8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f98da608c42083], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 381.822008ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f98da633a578ea], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f98da6799c61b7], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f98da564292630], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f98da63cd80e60], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f98da65bd465e1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 519.846427ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f98da67b1f3197], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f98da702d0ae72], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f98da564b447e9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f98da64c252a2f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f98da683980cd7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 930.267092ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f98da69c18b6c9], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f98da6cc9074b8], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f98da56535c466], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f98da6b1b7be77], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f98da6cc85057b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 449.652404ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f98da6dbf68b11], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f98da6fb4a724b], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f98da565c175f6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f98da6b54f4d93], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f98da6de844382], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 691.326951ms] STEP: Considering event: Type = 
[Normal], Name = [overcommit-7.16f98da6fcff50b2], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f98da737ce1ef8], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f98da56639c3d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f98da776284d9a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f98da7a762e016], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 825.916665ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f98da7ae5bbdd9], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f98da7b6b05fbe], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f98da566bcd4ea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8634/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f98da6fff13a05], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f98da70f1fc8c9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 254.701501ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f98da738d6ce57], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f98da776f7bc41], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16f98da8ee6e6ff0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:54:55.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8634" for this suite. 
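The failure the test waits for surfaces as a Warning event on the additional pod, quoted above as FailedScheduling with "0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }". A small client-go sketch that lists the events recorded for a pod; it assumes the namespace and pod name from this run and a kubeconfig in the default location:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from ~/.kube/config, as the e2e run itself does.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Events for the pod the test could not schedule; the FailedScheduling
	// reason and message quoted in the log would appear here.
	events, err := clientset.CoreV1().Events("sched-pred-8634").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=additional-pod",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
}
```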
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.381 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":5,"skipped":1490,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:54:55.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:54:55.782: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:54:55.791: INFO: Waiting for terminating namespaces to be deleted... 
Jun 17 23:54:55.793: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:54:55.805: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:54:55.805: INFO: Container discover ready: false, restart count 0 Jun 17 23:54:55.805: INFO: Container init ready: false, restart count 0 Jun 17 23:54:55.805: INFO: Container install ready: false, restart count 0 Jun 17 23:54:55.805: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:54:55.805: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:54:55.805: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:54:55.805: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:54:55.805: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:54:55.805: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:54:55.805: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:54:55.805: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:54:55.805: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:54:55.805: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:54:55.805: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:54:55.805: INFO: Container collectd ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:54:55.805: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:54:55.805: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:54:55.805: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:54:55.805: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container 
grafana ready: true, restart count 0 Jun 17 23:54:55.805: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:54:55.805: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-13 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-13 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-14 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-14 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-15 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-15 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-16 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-16 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-17 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-17 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-18 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-18 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-19 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-19 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-2 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-2 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-3 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.805: INFO: Container overcommit-3 ready: true, restart count 0 Jun 17 23:54:55.805: INFO: overcommit-4 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.806: INFO: Container overcommit-4 ready: true, restart count 0 Jun 17 23:54:55.806: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:54:55.825: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:54:55.825: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:54:55.825: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:54:55.825: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:54:55.825: INFO: Container discover ready: false, restart count 0 Jun 17 23:54:55.825: INFO: Container init ready: false, restart count 0 Jun 17 23:54:55.825: INFO: Container install ready: false, restart count 0 Jun 17 23:54:55.825: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:54:55.825: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 
20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:54:55.825: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:54:55.825: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:54:55.825: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:54:55.825: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:54:55.825: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:54:55.825: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:54:55.825: INFO: Container collectd ready: true, restart count 0 Jun 17 23:54:55.825: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:54:55.825: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:54:55.825: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:54:55.825: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:54:55.825: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-0 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-0 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-1 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-1 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-10 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-10 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-11 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-11 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-12 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-12 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-5 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-5 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-6 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-6 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-7 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 
container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-7 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-8 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-8 ready: true, restart count 0 Jun 17 23:54:55.825: INFO: overcommit-9 from sched-pred-8634 started at 2022-06-17 23:54:39 +0000 UTC (1 container statuses recorded) Jun 17 23:54:55.825: INFO: Container overcommit-9 ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4e4a0afe-f213-4733-9727-297c148816fe=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-eb66799f-eca1-4596-947c-4847f8290fcd testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da930ca2773], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2158/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9a3db19d1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9b46408e1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 277.388309ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9bbc7a461], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9c3960b9e], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98daa21097418], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f98daa221fe260], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-4e4a0afe-f213-4733-9727-297c148816fe: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f98daa221fe260], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-4e4a0afe-f213-4733-9727-297c148816fe: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da930ca2773], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2158/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9a3db19d1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9b46408e1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 277.388309ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9bbc7a461], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98da9c3960b9e], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f98daa21097418], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4e4a0afe-f213-4733-9727-297c148816fe=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16f98daa8ab69839], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2158/still-no-tolerations to node1] STEP: removing the label kubernetes.io/e2e-label-key-eb66799f-eca1-4596-947c-4847f8290fcd off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-eb66799f-eca1-4596-947c-4847f8290fcd STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4e4a0afe-f213-4733-9727-297c148816fe=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:55:01.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2158" for this suite. 
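This predicate test is the inverse of the earlier priority test: it places a hard NoSchedule taint on the chosen node, labels that node, and relaunches a pod that selects the node by label but carries no matching toleration, so the pod stays Pending with the FailedScheduling message quoted above. A minimal sketch of that combination, using the k8s.io/api types with illustrative key, label and value names instead of the generated ones in the log:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The hard NoSchedule taint placed on the chosen node (the earlier
	// priority test used PreferNoSchedule; NoSchedule filters the node out).
	taint := v1.Taint{
		Key:    "example.com/e2e-taint-key",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// The relaunched pod pins itself to the tainted node with a node
	// selector but carries no toleration, so no node fits: the labelled
	// node is rejected for the taint, the other worker for the selector,
	// and the control-plane nodes for their own master taint.
	pod := v1.Pod{
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{
				"example.com/e2e-label-key": "testing-label-value",
			},
			Containers: []v1.Container{
				{Name: "still-no-tolerations", Image: "k8s.gcr.io/pause:3.4.1"},
			},
			// Tolerations intentionally left empty.
		},
	}
	fmt.Println(taint.Effect, len(pod.Spec.Tolerations))
}
```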
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.187 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":6,"skipped":1863,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:55:01.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:55:01.973: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:55:01.981: INFO: Waiting for terminating namespaces to be deleted... 
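The PodTopologySpread Filtering test that follows labels two schedulable nodes with a dedicated topology key and then verifies that four pods constrained by maxSkew=1 split evenly, two per node. A sketch of the spread constraint such a pod would carry; the topology key is the one named in the log, while the pod name and label selector are illustrative:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pts-filter-0",
			Labels: map[string]string{"app": "pts-filter"}, // illustrative label
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{Name: "pts-filter", Image: "k8s.gcr.io/pause:3.4.1"},
			},
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				// With MaxSkew 1 and DoNotSchedule, the number of matching
				// pods per topology-key value may differ by at most one, so
				// four pods split 2/2 across the two labelled nodes.
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-filter",
				WhenUnsatisfiable: v1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-filter"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].MaxSkew)
}
```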
Jun 17 23:55:01.983: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:55:01.990: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:55:01.990: INFO: Container discover ready: false, restart count 0 Jun 17 23:55:01.990: INFO: Container init ready: false, restart count 0 Jun 17 23:55:01.990: INFO: Container install ready: false, restart count 0 Jun 17 23:55:01.990: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:55:01.990: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:55:01.990: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:55:01.990: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:55:01.990: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:55:01.990: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:55:01.990: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:55:01.990: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:55:01.990: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:55:01.990: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:55:01.990: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:55:01.990: INFO: Container collectd ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:55:01.990: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:55:01.990: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:55:01.990: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:55:01.990: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container 
grafana ready: true, restart count 0 Jun 17 23:55:01.990: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:55:01.990: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:55:01.990: INFO: still-no-tolerations from sched-pred-2158 started at 2022-06-17 23:55:01 +0000 UTC (1 container statuses recorded) Jun 17 23:55:01.990: INFO: Container still-no-tolerations ready: false, restart count 0 Jun 17 23:55:01.990: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:55:02.001: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:55:02.001: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:55:02.001: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:55:02.001: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:55:02.001: INFO: Container discover ready: false, restart count 0 Jun 17 23:55:02.001: INFO: Container init ready: false, restart count 0 Jun 17 23:55:02.001: INFO: Container install ready: false, restart count 0 Jun 17 23:55:02.001: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:55:02.001: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:55:02.001: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:55:02.001: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:55:02.001: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:55:02.001: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:55:02.001: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:55:02.001: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:55:02.001: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:55:02.001: INFO: Container collectd ready: true, restart count 0 Jun 17 23:55:02.001: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:55:02.001: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:55:02.001: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:55:02.001: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:55:02.001: INFO: Container node-exporter 
ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:55:22.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1686" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.173 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":7,"skipped":2454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] 
SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:55:22.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:55:22.157: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:55:22.165: INFO: Waiting for terminating namespaces to be deleted... Jun 17 23:55:22.167: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:55:22.175: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:55:22.175: INFO: Container discover ready: false, restart count 0 Jun 17 23:55:22.175: INFO: Container init ready: false, restart count 0 Jun 17 23:55:22.175: INFO: Container install ready: false, restart count 0 Jun 17 23:55:22.175: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:55:22.175: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:55:22.175: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:55:22.175: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:55:22.175: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:55:22.175: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:55:22.175: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:55:22.175: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:55:22.175: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:55:22.175: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:55:22.175: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:55:22.175: INFO: Container collectd ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 
23:55:22.175: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:55:22.175: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:55:22.175: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:55:22.175: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:55:22.175: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Container grafana ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:55:22.175: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:55:22.175: INFO: rs-e2e-pts-filter-54q94 from sched-pred-1686 started at 2022-06-17 23:55:16 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 17 23:55:22.175: INFO: rs-e2e-pts-filter-7djdg from sched-pred-1686 started at 2022-06-17 23:55:16 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.175: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 17 23:55:22.175: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:55:22.185: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:55:22.185: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:55:22.185: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:55:22.185: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:55:22.185: INFO: Container discover ready: false, restart count 0 Jun 17 23:55:22.185: INFO: Container init ready: false, restart count 0 Jun 17 23:55:22.185: INFO: Container install ready: false, restart count 0 Jun 17 23:55:22.185: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:55:22.185: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:55:22.185: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:55:22.185: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:55:22.185: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:55:22.185: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: 
Container nfd-worker ready: true, restart count 0 Jun 17 23:55:22.185: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.185: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:55:22.185: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:55:22.185: INFO: Container collectd ready: true, restart count 0 Jun 17 23:55:22.185: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:55:22.186: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:55:22.186: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:55:22.186: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:55:22.186: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:55:22.186: INFO: rs-e2e-pts-filter-82qsf from sched-pred-1686 started at 2022-06-17 23:55:16 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.186: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 17 23:55:22.186: INFO: rs-e2e-pts-filter-wvx7h from sched-pred-1686 started at 2022-06-17 23:55:16 +0000 UTC (1 container statuses recorded) Jun 17 23:55:22.186: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a4af90c3-3480-46c9-92e5-6bf50d2a09a3 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a4af90c3-3480-46c9-92e5-6bf50d2a09a3 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a4af90c3-3480-46c9-92e5-6bf50d2a09a3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:55:38.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2629" for this suite. 
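The three pods above share hostPort 54321 on the same node without conflicting because the scheduler treats the (hostIP, hostPort, protocol) triple as the conflict key: pod1 binds 127.0.0.1/TCP, pod2 binds 10.10.190.208/TCP, and pod3 binds 10.10.190.208/UDP. A small Go sketch of the differing container-port stanzas; the containerPort value 8080 is an assumption, while the node selector uses the random label the test applied above.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Three port declarations that differ only in hostIP or protocol,
	// so they can all occupy hostPort 54321 on one node.
	ports := map[string]v1.ContainerPort{
		"pod1": {ContainerPort: 8080, HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP},
		"pod2": {ContainerPort: 8080, HostPort: 54321, HostIP: "10.10.190.208", Protocol: v1.ProtocolTCP},
		"pod3": {ContainerPort: 8080, HostPort: 54321, HostIP: "10.10.190.208", Protocol: v1.ProtocolUDP},
	}

	// All three pods are pinned to the labeled node, e.g. via a nodeSelector
	// on the random label from this run.
	nodeSelector := map[string]string{
		"kubernetes.io/e2e-a4af90c3-3480-46c9-92e5-6bf50d2a09a3": "90",
	}
	fmt.Println("nodeSelector:", nodeSelector)

	for name, p := range ports {
		fmt.Printf("%s: hostIP=%s hostPort=%d protocol=%s\n", name, p.HostIP, p.HostPort, p.Protocol)
	}
}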
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.182 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":8,"skipped":3177,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:55:38.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 17 23:55:38.358: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 23:56:38.422: INFO: Waiting for terminating namespaces to be deleted... Jun 17 23:56:38.424: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 17 23:56:38.443: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting Jun 17 23:56:38.443: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting Jun 17 23:56:38.443: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 17 23:56:38.443: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Jun 17 23:56:38.458: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:56:38.458: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:56:38.458: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:56:38.458: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:56:38.458: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:56:38.458: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:56:38.458: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:56:38.458: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:56:38.458: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:56:38.458: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:56:38.458: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:56:38.458: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:56:38.458: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:56:38.458: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:56:38.458: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:56:38.458: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:56:38.458: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:56:38.458: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:56:38.458: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:56:38.458: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:56:38.458: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:56:38.458: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. 
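The ComputeCPUMemFraction figures above are the summed resource requests of the listed pods divided by the node's allocatable capacity: for node1, 937 mCPU requested out of 77000 mCPU allocatable gives the logged cpuFraction of about 0.0122, and 1774807040 bytes out of 178884608000 gives the memFraction of about 0.0099. A trivial check using the values copied from the log above:

package main

import "fmt"

func main() {
	// node1, figures taken from the listing above
	requestedCPU, allocatableCPU := 937.0, 77000.0                // millicores
	requestedMem, allocatableMem := 1774807040.0, 178884608000.0 // bytes

	fmt.Println("cpuFraction:", requestedCPU/allocatableCPU) // ~0.01216883116883117
	fmt.Println("memFraction:", requestedMem/allocatableMem) // ~0.009921519016325877
}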
STEP: Verifying the node has a label kubernetes.io/hostname Jun 17 23:56:42.503: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:56:42.503: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:56:42.503: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.503: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:56:42.503: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:56:42.503: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:56:42.503: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.503: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:56:42.503: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:56:42.503: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:56:42.504: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:56:42.504: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:56:42.504: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:56:42.504: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:56:42.504: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:56:42.504: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:56:42.504: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:56:42.504: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:56:42.504: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:56:42.504: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:56:42.504: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:56:42.504: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 17 23:56:42.504: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:56:42.504: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 Jun 17 23:56:42.515: INFO: Waiting for running... Jun 17 23:56:42.518: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 17 23:56:47.587: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:56:47.587: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:56:47.587: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.587: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:56:47.587: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:56:47.588: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:56:47.588: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:56:47.588: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:56:47.588: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:56:47.588: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:56:47.588: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: b5318b0d-0988-45d9-aed9-e38dc91d454e-0, Cpu: 45263, Mem: 105568540672 Jun 17 23:56:47.588: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 Jun 17 23:56:47.588: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
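The large unnamed pod logged above for node1 (CPU request 45263 mCPU) is the "balanced" filler the suite creates so both nodes sit at roughly the same utilisation (0.6 here) before the pod under test is scheduled. Its CPU request is simply the gap between the target fraction and what is already requested: 0.6 × 77000 − 937 = 45263 mCPU, exactly the value logged; the memory request is derived the same way from the memory target (the logged memFraction lands marginally above 0.6, presumably from rounding inside the framework). A short check of the CPU arithmetic:

package main

import "fmt"

func main() {
	// Filler-pod sizing for node1, using the figures logged above.
	target := 0.6
	allocatableCPU := 77000.0 // millicores
	requestedCPU := 937.0     // millicores already requested on the node

	fillerCPU := target*allocatableCPU - requestedCPU
	fmt.Println("filler CPU request (mCPU):", fillerCPU) // 45263, matching the log

	// After the filler is created: (937 + 45263) / 77000 = 0.6
	fmt.Println("resulting cpuFraction:", (requestedCPU+fillerCPU)/allocatableCPU)
}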
Jun 17 23:56:47.588: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:56:47.588: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:56:47.588: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:56:47.588: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:56:47.588: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:56:47.588: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:56:47.588: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:56:47.588: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:56:47.588: INFO: Pod for on the node: 4d780395-d028-476d-a085-28f7cc829b26-0, Cpu: 45713, Mem: 106838400614 Jun 17 23:56:47.588: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 17 23:56:47.588: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 Jun 17 23:56:47.588: INFO: Node: node2, totalRequestedMemResource: 107343345254, memAllocatableVal: 178884603904, memFraction: 0.6000703409422913 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:56:59.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3017" for this suite. 
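The pod launched "with podAntiAffinity" above must not share a node (topologyKey kubernetes.io/hostname, the label verified earlier) with the previously created pod-with-label-security-s1; since that pod is listed on node2, the new pod is expected to land on node1. A Go sketch of such an anti-affinity stanza; the security=S1 label is inferred from the pod's name and is an assumption rather than something printed in this log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	affinity := v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			// Hard requirement: never co-locate with pods labelled security=S1
			// on the same node (topologyKey kubernetes.io/hostname).
			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	out, err := yaml.Marshal(affinity)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}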
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:81.306 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":9,"skipped":3946,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:56:59.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 17 23:56:59.681: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 23:57:59.734: INFO: Waiting for terminating namespaces to be deleted... Jun 17 23:57:59.736: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 17 23:57:59.755: INFO: The status of Pod cmk-init-discover-node1-bvmrv is Succeeded, skipping waiting Jun 17 23:57:59.755: INFO: The status of Pod cmk-init-discover-node2-z2vgz is Succeeded, skipping waiting Jun 17 23:57:59.755: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 17 23:57:59.755: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Jun 17 23:57:59.770: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:57:59.770: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.770: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:57:59.770: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:57:59.770: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:57:59.770: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:57:59.770: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:57:59.770: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.770: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:57:59.770: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:57:59.770: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:57:59.770: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:57:59.770: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:57:59.770: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.770: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:57:59.770: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:57:59.770: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:57:59.770: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.770: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.770: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:57:59.770: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:57:59.770: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Jun 17 23:57:59.787: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:57:59.787: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.787: INFO: 
Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.787: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:57:59.787: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:57:59.787: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:57:59.787: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.787: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:57:59.787: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:57:59.787: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.787: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.787: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.787: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:57:59.788: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:57:59.788: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.788: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 Jun 17 23:57:59.788: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 Jun 17 23:57:59.788: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:57:59.788: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:57:59.788: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.788: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:57:59.788: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:57:59.788: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.788: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.788: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:57:59.788: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.788: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:57:59.788: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:57:59.788: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:57:59.788: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 Jun 17 23:57:59.788: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884603904, memFraction: 0.00282273951463695 Jun 17 23:57:59.802: INFO: Waiting for running... Jun 17 23:57:59.804: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 17 23:58:04.873: INFO: ComputeCPUMemFraction for node: node1 Jun 17 23:58:04.873: INFO: Pod for on the node: cmk-init-discover-node1-bvmrv, Cpu: 300, Mem: 629145600 Jun 17 23:58:04.873: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-qcmrd, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: cmk-xh247, Cpu: 200, Mem: 419430400 Jun 17 23:58:04.873: INFO: Pod for on the node: kube-flannel-wqcwq, Cpu: 150, Mem: 64000000 Jun 17 23:58:04.873: INFO: Pod for on the node: kube-multus-ds-amd64-m6vf8, Cpu: 100, Mem: 94371840 Jun 17 23:58:04.873: INFO: Pod for on the node: kube-proxy-t4lqk, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-26kg6, Cpu: 50, Mem: 64000000 Jun 17 23:58:04.873: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 17 23:58:04.873: INFO: Pod for on the node: node-feature-discovery-worker-dgp4b, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: collectd-5src2, Cpu: 300, Mem: 629145600 Jun 17 23:58:04.873: INFO: Pod for on the node: node-exporter-8ftgl, Cpu: 112, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 17 23:58:04.873: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: c0c60519-d225-4014-8eaf-0980a7d90ed9-0, Cpu: 37563, Mem: 87680079872 Jun 17 23:58:04.873: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 17 23:58:04.873: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 17 23:58:04.873: INFO: ComputeCPUMemFraction for node: node2 Jun 17 23:58:04.873: INFO: Pod for on the node: cmk-5gtjq, Cpu: 200, Mem: 419430400 Jun 17 23:58:04.873: INFO: Pod for on the node: cmk-init-discover-node2-z2vgz, Cpu: 300, Mem: 629145600 Jun 17 23:58:04.873: INFO: Pod for on the node: kube-flannel-plbl8, Cpu: 150, Mem: 64000000 Jun 17 23:58:04.873: INFO: Pod for on the node: kube-multus-ds-amd64-hblk4, Cpu: 100, Mem: 94371840 Jun 17 23:58:04.873: INFO: Pod for on the node: kube-proxy-pvtj6, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-w4nk8, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 17 23:58:04.873: INFO: Pod for on the node: node-feature-discovery-worker-82r46, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5, Cpu: 100, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: collectd-6bcqz, Cpu: 300, Mem: 629145600 Jun 17 23:58:04.873: INFO: Pod for on the node: node-exporter-xgz6d, Cpu: 112, Mem: 209715200 Jun 17 23:58:04.873: INFO: Pod for on the node: ed501068-1f73-4ddf-83af-1749134a178e-0, Cpu: 38013, Mem: 88949940224 Jun 17 23:58:04.873: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 17 23:58:04.873: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. 
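"Applying avoidPod annotations on the first node" refers to the alpha node annotation scheduler.alpha.kubernetes.io/preferAvoidPods, whose JSON value asks the scheduler's scoring to steer pods owned by a given controller away from that node; here the controller is the ReplicationController scheduler-priority-avoid-pod created just above. A sketch of how that annotation value can be assembled; the UID, reason, and message are placeholders (the real test reads the UID from the live RC), and the exact payload the test writes is not shown in this log.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "<uid-of-the-live-rc>", // placeholder, not from this run
					Controller: &controller,
				},
			},
			Reason:  "scheduler e2e test",            // placeholder
			Message: "node should avoid these pods", // placeholder
		}},
	}
	value, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}
	// The serialized value is then set on node1 as:
	//   scheduler.alpha.kubernetes.io/preferAvoidPods: <value>
	fmt.Println(string(value))
}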
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1560 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1560, will wait for the garbage collector to delete the pods Jun 17 23:58:11.064: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.247898ms Jun 17 23:58:11.164: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.391365ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:58:29.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1560" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:90.038 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":10,"skipped":4909,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:58:29.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 17 23:58:29.725: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 17 23:58:29.733: INFO: Waiting for terminating namespaces to be deleted... 
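The test announced above ("required NodeAffinity setting is respected if matching") applies a random label to one node and then relaunches a pod whose required node affinity selects that label, so the pod can only schedule onto that node. A Go sketch of the affinity term, using the random label and value that appear later in this run; the surrounding layout is the standard requiredDuringSchedulingIgnoredDuringExecution form, not copied from the test source.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	affinity := v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			// Hard requirement: only nodes carrying the random e2e label qualify.
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-23a018c0-a40a-4fa5-a9c4-4c642f0333ce",
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	out, err := yaml.Marshal(affinity)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}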
Jun 17 23:58:29.735: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 17 23:58:29.742: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 17 23:58:29.742: INFO: Container discover ready: false, restart count 0 Jun 17 23:58:29.742: INFO: Container init ready: false, restart count 0 Jun 17 23:58:29.742: INFO: Container install ready: false, restart count 0 Jun 17 23:58:29.742: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container cmk-webhook ready: true, restart count 0 Jun 17 23:58:29.742: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 17 23:58:29.742: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:58:29.742: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:58:29.742: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:58:29.742: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:58:29.742: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 17 23:58:29.742: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:58:29.742: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:58:29.742: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:58:29.742: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:58:29.742: INFO: Container collectd ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:58:29.742: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:58:29.742: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container node-exporter ready: true, restart count 0 Jun 17 23:58:29.742: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 17 23:58:29.742: INFO: Container config-reloader ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container 
grafana ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Container prometheus ready: true, restart count 1 Jun 17 23:58:29.742: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.742: INFO: Container tas-extender ready: true, restart count 0 Jun 17 23:58:29.742: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 17 23:58:29.761: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 17 23:58:29.761: INFO: Container nodereport ready: true, restart count 0 Jun 17 23:58:29.761: INFO: Container reconcile ready: true, restart count 0 Jun 17 23:58:29.761: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 17 23:58:29.761: INFO: Container discover ready: false, restart count 0 Jun 17 23:58:29.761: INFO: Container init ready: false, restart count 0 Jun 17 23:58:29.761: INFO: Container install ready: false, restart count 0 Jun 17 23:58:29.761: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container kube-flannel ready: true, restart count 2 Jun 17 23:58:29.761: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container kube-multus ready: true, restart count 1 Jun 17 23:58:29.761: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container kube-proxy ready: true, restart count 2 Jun 17 23:58:29.761: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 17 23:58:29.761: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container nginx-proxy ready: true, restart count 2 Jun 17 23:58:29.761: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container nfd-worker ready: true, restart count 0 Jun 17 23:58:29.761: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 17 23:58:29.761: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 17 23:58:29.761: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 17 23:58:29.761: INFO: Container collectd ready: true, restart count 0 Jun 17 23:58:29.761: INFO: Container collectd-exporter ready: true, restart count 0 Jun 17 23:58:29.761: INFO: Container rbac-proxy ready: true, restart count 0 Jun 17 23:58:29.761: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 17 23:58:29.761: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 17 23:58:29.761: INFO: Container node-exporter ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying 
to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-23a018c0-a40a-4fa5-a9c4-4c642f0333ce 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-23a018c0-a40a-4fa5-a9c4-4c642f0333ce off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-23a018c0-a40a-4fa5-a9c4-4c642f0333ce [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 17 23:58:37.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5356" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.145 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":11,"skipped":5628,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 17 23:58:37.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 17 23:58:37.880: INFO: Waiting up to 1m0s for all nodes to be ready Jun 17 23:59:37.943: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. 
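The preemption setup above advertises 10 units of a fake extended resource on each node; the test then fills 9 of the 10 on both nodes with one high-priority and three low-priority pods, and finally creates a medium-priority pod that also carries a topology spread constraint, so the scheduler has to preempt low-priority pods (never the high one) while keeping the spread over kubernetes.io/e2e-pts-preemption valid. A Go sketch of what such a medium pod could look like; the resource name, priority class name, request size, labels, and image are assumptions, and extended resources must set limits equal to requests.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Assumed name for the fake extended resource applied to both nodes.
	fakeRes := v1.ResourceName("example.com/fake-pts-resource")

	medium := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"app": "pts-preemption"}, // assumed label
		},
		Spec: v1.PodSpec{
			PriorityClassName: "medium-priority", // assumed priority class name
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1", // assumed image
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{fakeRes: resource.MustParse("4")},
					Limits:   v1.ResourceList{fakeRes: resource.MustParse("4")},
				},
			}},
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: v1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-preemption"},
				},
			}},
		},
	}
	out, err := yaml.Marshal(medium)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}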
[It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:00:12.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4143" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:94.402 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":12,"skipped":5658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 18 00:00:12.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 18 00:00:12.278: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 18 00:00:12.289: INFO: Waiting for terminating namespaces to be deleted... 
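The PodTopologySpread Preemption spec that passed above patches 10 units of a fake extended resource onto node1 and node2, fills 9/10 of that capacity with one high-priority and three low-priority pods, and then creates a "medium" pod with a topology spread constraint over the dedicated key kubernetes.io/e2e-pts-preemption, forcing a low-priority pod to be preempted. Below is a rough Go sketch of what such a "medium" pod could look like; only the topology key and pod name come from the log, while the extended-resource name, priority class, labels, quantities, and image are assumptions for illustration, not the values used by preemption.go.

```go
// Sketch of a "medium"-style pod: requests an assumed fake extended resource
// and must be spread across nodes labelled with the dedicated topology key.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fakeRes := corev1.ResourceName("example.com/fake-pts-res") // assumed resource name

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "medium"}, // assumed labels
		},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // assumed priority class
			Containers: []corev1.Container{{
				Name:  "medium",
				Image: "registry.example.com/pause:latest", // placeholder image
				Resources: corev1.ResourceRequirements{
					// Illustrative quantity; the log only says 9/10 of the
					// fake resource is already occupied before this pod.
					Requests: corev1.ResourceList{fakeRes: resource.MustParse("4")},
					Limits:   corev1.ResourceList{fakeRes: resource.MustParse("4")},
				},
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption", // from the log
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```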
Jun 18 00:00:12.291: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 18 00:00:12.302: INFO: cmk-init-discover-node1-bvmrv from kube-system started at 2022-06-17 20:13:02 +0000 UTC (3 container statuses recorded) Jun 18 00:00:12.302: INFO: Container discover ready: false, restart count 0 Jun 18 00:00:12.302: INFO: Container init ready: false, restart count 0 Jun 18 00:00:12.302: INFO: Container install ready: false, restart count 0 Jun 18 00:00:12.302: INFO: cmk-webhook-6c9d5f8578-qcmrd from kube-system started at 2022-06-17 20:13:52 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.302: INFO: Container cmk-webhook ready: true, restart count 0 Jun 18 00:00:12.302: INFO: cmk-xh247 from kube-system started at 2022-06-17 20:13:51 +0000 UTC (2 container statuses recorded) Jun 18 00:00:12.302: INFO: Container nodereport ready: true, restart count 0 Jun 18 00:00:12.302: INFO: Container reconcile ready: true, restart count 0 Jun 18 00:00:12.302: INFO: kube-flannel-wqcwq from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.302: INFO: Container kube-flannel ready: true, restart count 2 Jun 18 00:00:12.302: INFO: kube-multus-ds-amd64-m6vf8 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.302: INFO: Container kube-multus ready: true, restart count 1 Jun 18 00:00:12.303: INFO: kube-proxy-t4lqk from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container kube-proxy ready: true, restart count 2 Jun 18 00:00:12.303: INFO: kubernetes-dashboard-785dcbb76d-26kg6 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container kubernetes-dashboard ready: true, restart count 2 Jun 18 00:00:12.303: INFO: nginx-proxy-node1 from kube-system started at 2022-06-17 20:00:39 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container nginx-proxy ready: true, restart count 2 Jun 18 00:00:12.303: INFO: node-feature-discovery-worker-dgp4b from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container nfd-worker ready: true, restart count 0 Jun 18 00:00:12.303: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-whtq2 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 18 00:00:12.303: INFO: collectd-5src2 from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 18 00:00:12.303: INFO: Container collectd ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Container collectd-exporter ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Container rbac-proxy ready: true, restart count 0 Jun 18 00:00:12.303: INFO: node-exporter-8ftgl from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 18 00:00:12.303: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Container node-exporter ready: true, restart count 0 Jun 18 00:00:12.303: INFO: prometheus-k8s-0 from monitoring started at 2022-06-17 20:14:56 +0000 UTC (4 container statuses recorded) Jun 18 00:00:12.303: INFO: Container config-reloader ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Container 
grafana ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Container prometheus ready: true, restart count 1 Jun 18 00:00:12.303: INFO: tas-telemetry-aware-scheduling-84ff454dfb-tbvjv from monitoring started at 2022-06-17 20:17:57 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container tas-extender ready: true, restart count 0 Jun 18 00:00:12.303: INFO: high from sched-preemption-4143 started at 2022-06-17 23:59:49 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.303: INFO: Container high ready: true, restart count 0 Jun 18 00:00:12.303: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 18 00:00:12.320: INFO: cmk-5gtjq from kube-system started at 2022-06-17 20:13:52 +0000 UTC (2 container statuses recorded) Jun 18 00:00:12.320: INFO: Container nodereport ready: true, restart count 0 Jun 18 00:00:12.320: INFO: Container reconcile ready: true, restart count 0 Jun 18 00:00:12.320: INFO: cmk-init-discover-node2-z2vgz from kube-system started at 2022-06-17 20:13:25 +0000 UTC (3 container statuses recorded) Jun 18 00:00:12.320: INFO: Container discover ready: false, restart count 0 Jun 18 00:00:12.320: INFO: Container init ready: false, restart count 0 Jun 18 00:00:12.320: INFO: Container install ready: false, restart count 0 Jun 18 00:00:12.320: INFO: kube-flannel-plbl8 from kube-system started at 2022-06-17 20:01:38 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.320: INFO: Container kube-flannel ready: true, restart count 2 Jun 18 00:00:12.320: INFO: kube-multus-ds-amd64-hblk4 from kube-system started at 2022-06-17 20:01:47 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.320: INFO: Container kube-multus ready: true, restart count 1 Jun 18 00:00:12.320: INFO: kube-proxy-pvtj6 from kube-system started at 2022-06-17 20:00:43 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.320: INFO: Container kube-proxy ready: true, restart count 2 Jun 18 00:00:12.320: INFO: kubernetes-metrics-scraper-5558854cb-w4nk8 from kube-system started at 2022-06-17 20:02:19 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.320: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 18 00:00:12.320: INFO: nginx-proxy-node2 from kube-system started at 2022-06-17 20:00:37 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.320: INFO: Container nginx-proxy ready: true, restart count 2 Jun 18 00:00:12.320: INFO: node-feature-discovery-worker-82r46 from kube-system started at 2022-06-17 20:09:28 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.321: INFO: Container nfd-worker ready: true, restart count 0 Jun 18 00:00:12.321: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xr9c5 from kube-system started at 2022-06-17 20:10:41 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.321: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 18 00:00:12.321: INFO: collectd-6bcqz from monitoring started at 2022-06-17 20:18:47 +0000 UTC (3 container statuses recorded) Jun 18 00:00:12.321: INFO: Container collectd ready: true, restart count 0 Jun 18 00:00:12.321: INFO: Container collectd-exporter ready: true, restart count 0 Jun 18 00:00:12.321: INFO: Container rbac-proxy ready: true, restart count 0 Jun 18 00:00:12.321: INFO: node-exporter-xgz6d from monitoring started at 2022-06-17 20:14:54 +0000 UTC (2 container statuses recorded) Jun 18 00:00:12.321: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 18 00:00:12.321: INFO: Container node-exporter ready: true, restart count 0 
Jun 18 00:00:12.321: INFO: low-1 from sched-preemption-4143 started at 2022-06-17 23:59:54 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.321: INFO: Container low-1 ready: true, restart count 0 Jun 18 00:00:12.321: INFO: medium from sched-preemption-4143 started at 2022-06-18 00:00:07 +0000 UTC (1 container statuses recorded) Jun 18 00:00:12.321: INFO: Container medium ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f98df2e2835377], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 18 00:00:13.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6646" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":13,"skipped":5751,"failed":0} SSSSSSSSSJun 18 00:00:13.367: INFO: Running AfterSuite actions on all nodes Jun 18 00:00:13.367: INFO: Running AfterSuite actions on node 1 Jun 18 00:00:13.367: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0} Ran 13 of 5773 Specs in 532.490 seconds SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped PASS Ginkgo ran 1 suite in 8m53.869158968s Test Suite Passed
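For reference, the final spec above ("NodeAffinity is respected if not matching") only needs a pod whose non-empty node selector no node can satisfy, which produces the FailedScheduling event quoted in the log. A minimal Go sketch of such a pod follows; the pod name matches the event above, while the selector key/value and image are placeholders, not taken from this log.

```go
// Sketch of an intentionally unschedulable pod: its node selector matches no
// node, so the scheduler reports "node(s) didn't match Pod's node
// affinity/selector", as in the restricted-pod event above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// A label key/value that exists on no node in the cluster.
			NodeSelector: map[string]string{"e2e.example.com/no-such-label": "true"},
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "registry.example.com/pause:latest", // placeholder image
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```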