I1030 04:56:52.577444 21 e2e.go:129] Starting e2e run "c3b3b8f9-745d-4f9a-a43b-515355b9d786" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1635569811 - Will randomize all specs
Will run 13 of 5770 specs
Oct 30 04:56:52.613: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:56:52.618: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 30 04:56:52.649: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 30 04:56:52.714: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting
Oct 30 04:56:52.714: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting
Oct 30 04:56:52.714: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 30 04:56:52.714: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 30 04:56:52.714: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 30 04:56:52.731: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 30 04:56:52.731: INFO: e2e test version: v1.21.5
Oct 30 04:56:52.732: INFO: kube-apiserver version: v1.21.1
Oct 30 04:56:52.732: INFO: >>> kubeConfig: /root/.kube/config
Oct 30 04:56:52.737: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:56:52.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W1030 04:56:52.771356 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 04:56:52.771: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 04:56:52.775: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 04:56:52.777: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 04:56:52.785: INFO: Waiting for terminating namespaces to be deleted... Oct 30 04:56:52.788: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 04:56:52.804: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 04:56:52.804: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:56:52.804: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:56:52.804: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 04:56:52.804: INFO: Container discover ready: false, restart count 0 Oct 30 04:56:52.805: INFO: Container init ready: false, restart count 0 Oct 30 04:56:52.805: INFO: Container install ready: false, restart count 0 Oct 30 04:56:52.805: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 04:56:52.805: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:56:52.805: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:56:52.805: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 04:56:52.805: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:56:52.805: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:56:52.805: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:56:52.805: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 04:56:52.805: INFO: 
Container collectd ready: true, restart count 0 Oct 30 04:56:52.805: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:56:52.805: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 04:56:52.805: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 04:56:52.805: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:56:52.805: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:56:52.805: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 04:56:52.805: INFO: Container config-reloader ready: true, restart count 0 Oct 30 04:56:52.805: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 04:56:52.805: INFO: Container grafana ready: true, restart count 0 Oct 30 04:56:52.805: INFO: Container prometheus ready: true, restart count 1 Oct 30 04:56:52.805: INFO: back-off-cap from pods-3666 started at 2021-10-30 04:29:08 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.805: INFO: Container back-off-cap ready: false, restart count 10 Oct 30 04:56:52.805: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 04:56:52.818: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 04:56:52.818: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:56:52.818: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:56:52.818: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 04:56:52.818: INFO: Container discover ready: false, restart count 0 Oct 30 04:56:52.818: INFO: Container init ready: false, restart count 0 Oct 30 04:56:52.818: INFO: Container install ready: false, restart count 0 Oct 30 04:56:52.818: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 04:56:52.818: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 04:56:52.818: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:56:52.818: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:56:52.818: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 04:56:52.818: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:56:52.818: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:56:52.818: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from 
kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:56:52.818: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 04:56:52.818: INFO: Container collectd ready: true, restart count 0 Oct 30 04:56:52.818: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:56:52.818: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 04:56:52.818: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 04:56:52.818: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:56:52.818: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:56:52.818: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 04:56:52.818: INFO: Container tas-extender ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Oct 30 04:56:52.855: INFO: Pod cmk-89lqq requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod cmk-8bpbf requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod cmk-webhook-6c9d5f8578-ffk66 requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod kube-flannel-f6s5v requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod kube-flannel-phg88 requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod kube-multus-ds-amd64-68wrz requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod kube-multus-ds-amd64-7tvbl requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod kube-proxy-76285 requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod kube-proxy-z5hqt requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod kubernetes-dashboard-785dcbb76d-pbjjt requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod kubernetes-metrics-scraper-5558854cb-5rmjw requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod node-feature-discovery-worker-h6lcp requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod node-feature-discovery-worker-w5vdb requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-t789r requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod collectd-d45rv requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod collectd-flvhl requesting local ephemeral resource =0 on Node node2 Oct 30 04:56:52.855: INFO: Pod node-exporter-256wm requesting local ephemeral resource =0 on Node node1 Oct 30 04:56:52.855: INFO: Pod 
node-exporter-r77s4 requesting local ephemeral resource =0 on Node node2
Oct 30 04:56:52.855: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1
Oct 30 04:56:52.855: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-989mh requesting local ephemeral resource =0 on Node node2
Oct 30 04:56:52.855: INFO: Using pod capacity: 40542413347
Oct 30 04:56:52.855: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473
Oct 30 04:56:52.855: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473
STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one
Oct 30 04:56:53.075: INFO: Waiting for running...
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b61445f18c5b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-0 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b6159e620cd9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b615bd8d1b5d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 522.90613ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b615e4d7a897], Reason = [Created], Message = [Created container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b6160f6f59f8], Reason = [Started], Message = [Started container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b6144834759f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-1 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b614c560f46a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b614f27ed9d3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 756.927282ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b61539f70f83], Reason = [Created], Message = [Created container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b615b8606c0e], Reason = [Started], Message = [Started container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b6144cec8c9d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-10 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b615b26af87d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b615ca691f62], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 402.52551ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b615ec67a71c], Reason = [Created], Message = [Created container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b6161f546cbd], Reason = [Started], Message = [Started container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b6144d80466e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-11 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b61548842408], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b6155f368808], Reason =
[Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 380.782472ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b61578ffa3c9], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b615b1826aac], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b6144e0391a6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b61638fa9a9c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b6164c08ecd5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 319.698363ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b61658b88eb0], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b6165f48a2f9], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b6144e8acee4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b615b582b4e6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b615de57816b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 685.028179ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b615fefaa687], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b6163a184930], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b6144f173738], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b6164f9be861], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b61662691fac], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 315.434299ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b61668b5392c], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b6166f9bd5b8], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b6144f9cc5df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-15 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b61651d83861], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b616760b86c4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 607.329001ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b6167c9356c9], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b61683da8728], Reason 
= [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b61450403e48], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b61655d36a88], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b616a445f97d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.316123436s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b616aaf5b4d5], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b616b393212b], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b61450cead47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b616556c9fee], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b6167dd2603c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 677.750843ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b6168487b637], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b6169a646f36], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b6145152b8cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b61655c1b722], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b61690cbda13], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 990.509219ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b61699c73fe6], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b616a1b36b7e], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b61451d42d16], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b61653055166], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b6166a20923b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 387.656673ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b61670a1fe8e], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b6167a84a860], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b61448b77b7f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b6159fbbaf34], Reason = 
[Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b615d6025c7d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 910.595012ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b6160f749ce6], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b6164f6fad87], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b6144946f630], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b61603f5e7cd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b6163fb8028a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.002565891s] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b6165470ddd7], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b6165c46d518], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b61449cbab94], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b6157702dd15], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b6158d0bbb97], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 369.672454ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b615accb77ba], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b6160b83f7b7], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b6144a5233c8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b6157732580b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b615a2016db0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 718.207775ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b615af85386c], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b6160c02e930], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b6144ad6950f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b614c7ed57ee], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b614ee3fa419], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 642.92023ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b61561e9bd42], Reason = [Created], Message = 
[Created container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b615acd466db], Reason = [Started], Message = [Started container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b6144b5807e0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-7 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b6160156442e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b61616b48bd1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 358.492493ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b6163bce4ea3], Reason = [Created], Message = [Created container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b616577c0129], Reason = [Started], Message = [Started container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b6144bde4200], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-8 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b61603c13519], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b6162b9a9e77], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 668.555134ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b6164870a667], Reason = [Created], Message = [Created container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b616584689d5], Reason = [Started], Message = [Started container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b6144c6af6cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1456/overcommit-9 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b61554d604d7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b6157914810b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 608.068108ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b615aecf62c4], Reason = [Created], Message = [Created container overcommit-9]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b615ff0a5500], Reason = [Started], Message = [Started container overcommit-9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16b2b617d40aba59], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:57:09.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1456" for this suite.
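The figures in this spec line up as follows; the short Go sketch below only reproduces the arithmetic visible in the log (it is not the e2e framework's code, and the variable names are invented). Each worker node reports 405424133473 bytes of allocatable ephemeral storage, every overcommit pod requests one tenth of that ("Using pod capacity: 40542413347"), so ten pods fit per node, twenty across node1 and node2, and the extra pod fails with "Insufficient ephemeral-storage".

package main

import "fmt"

func main() {
	// Figures taken from the log above; names and structure are illustrative only.
	allocatablePerNode := int64(405424133473) // "local ephemeral resource allocatable" per worker node
	workerNodes := int64(2)                   // node1 and node2; the three master nodes are tainted

	podCapacity := allocatablePerNode / 10 // 40542413347, the "Using pod capacity" value
	overcommitPods := workerNodes * (allocatablePerNode / podCapacity)

	fmt.Println("request per overcommit pod:", podCapacity)   // 40542413347
	fmt.Println("overcommit pods started:   ", overcommitPods) // 20

	// With ten overcommit pods bound per node, less than one pod's worth of
	// ephemeral storage remains, so the 21st ("additional") pod is rejected
	// with "Insufficient ephemeral-storage".
	leftPerNode := allocatablePerNode - 10*podCapacity
	fmt.Println("bytes left per node:", leftPerNode, "fits another pod:", leftPerNode >= podCapacity)
}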
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.429 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":1,"skipped":467,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:57:09.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 30 04:57:09.211: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 30 04:57:09.218: INFO: Waiting for terminating namespaces to be deleted...
Oct 30 04:57:09.221: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 04:57:09.230: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 04:57:09.230: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:57:09.230: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 04:57:09.230: INFO: Container discover ready: false, restart count 0 Oct 30 04:57:09.230: INFO: Container init ready: false, restart count 0 Oct 30 04:57:09.230: INFO: Container install ready: false, restart count 0 Oct 30 04:57:09.230: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 04:57:09.230: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:57:09.230: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:57:09.230: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 04:57:09.230: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:57:09.230: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:57:09.230: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:57:09.230: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 04:57:09.230: INFO: Container collectd ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 04:57:09.230: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 04:57:09.230: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:57:09.230: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 04:57:09.230: INFO: Container config-reloader ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container grafana ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Container prometheus ready: true, restart count 1 Oct 30 04:57:09.230: INFO: overcommit-1 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC 
(1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-1 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-10 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-10 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-11 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-11 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-12 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-12 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-13 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-13 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-16 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-16 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-17 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-17 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-18 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-18 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-19 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-19 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: overcommit-9 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.230: INFO: Container overcommit-9 ready: true, restart count 0 Oct 30 04:57:09.230: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 04:57:09.241: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 04:57:09.241: INFO: Container nodereport ready: true, restart count 0 Oct 30 04:57:09.241: INFO: Container reconcile ready: true, restart count 0 Oct 30 04:57:09.241: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 04:57:09.241: INFO: Container discover ready: false, restart count 0 Oct 30 04:57:09.241: INFO: Container init ready: false, restart count 0 Oct 30 04:57:09.241: INFO: Container install ready: false, restart count 0 Oct 30 04:57:09.241: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 04:57:09.241: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 04:57:09.241: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container kube-multus ready: true, restart count 1 Oct 30 04:57:09.241: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 
container statuses recorded) Oct 30 04:57:09.241: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 04:57:09.241: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 04:57:09.241: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 04:57:09.241: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 04:57:09.241: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 04:57:09.241: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 04:57:09.241: INFO: Container collectd ready: true, restart count 0 Oct 30 04:57:09.241: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 04:57:09.241: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 04:57:09.241: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 04:57:09.241: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 04:57:09.241: INFO: Container node-exporter ready: true, restart count 0 Oct 30 04:57:09.241: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container tas-extender ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-0 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-0 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-14 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-14 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-15 from sched-pred-1456 started at 2021-10-30 04:56:53 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-15 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-2 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-2 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-3 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-3 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-4 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-4 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-5 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded) Oct 30 04:57:09.241: INFO: Container overcommit-5 ready: true, restart count 0 Oct 30 04:57:09.241: INFO: overcommit-6 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container 
statuses recorded)
Oct 30 04:57:09.241: INFO: Container overcommit-6 ready: true, restart count 0
Oct 30 04:57:09.241: INFO: overcommit-7 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded)
Oct 30 04:57:09.241: INFO: Container overcommit-7 ready: true, restart count 0
Oct 30 04:57:09.241: INFO: overcommit-8 from sched-pred-1456 started at 2021-10-30 04:56:52 +0000 UTC (1 container statuses recorded)
Oct 30 04:57:09.241: INFO: Container overcommit-8 ready: true, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6.16b2b61bd6ae7cd3], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6.16b2b61cbdda0eaa], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9559/filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6 to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6.16b2b61d12de3bcf], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6.16b2b61d2577c048], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 312.045963ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6.16b2b61d2b8836a0], Reason = [Created], Message = [Created container filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6]
STEP: Considering event: Type = [Normal], Name = [filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6.16b2b61d32afe943], Reason = [Started], Message = [Started container filler-pod-6f0b29e4-d642-4eeb-b7a9-6be3b77ba6f6]
STEP: Considering event: Type = [Normal], Name = [without-label.16b2b6198085a416], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9559/without-label to node2]
STEP: Considering event: Type = [Normal], Name = [without-label.16b2b61b13e290ec], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-label.16b2b61b256b5c91], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 294.170696ms]
STEP: Considering event: Type = [Normal], Name = [without-label.16b2b61b2c961d09], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16b2b61b333bc5b2], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16b2b61bd5353b5d], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [additional-pod97a9553a-1d45-4137-99b8-c653531a815b.16b2b61db4c17af6], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 04:57:34.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9559" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:25.241 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates pod overhead is considered along with resource limits of pods that are allowed to run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
verify pod overhead is accounted for
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":2,"skipped":739,"failed":0}
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 30 04:57:34.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Oct 30 04:57:34.456: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 30 04:58:34.511: INFO: Waiting for terminating namespaces to be deleted...
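The "Insufficient example.com/beardsecond" failures in the pod-overhead spec above follow from how RuntimeClass overhead is charged during scheduling: the scheduler counts a pod's container requests plus the overhead declared by its RuntimeClass against the node. The Go sketch below illustrates that bookkeeping with invented quantities (the test's actual values for the fake example.com/beardsecond resource are not shown in the log).

package main

import "fmt"

// effectiveRequest is the amount the scheduler charges against a node for one
// pod: the containers' requests plus the overhead declared by the pod's
// RuntimeClass (the behavior the spec above exercises).
func effectiveRequest(containerRequests, runtimeClassOverhead int64) int64 {
	return containerRequests + runtimeClassOverhead
}

func main() {
	// All quantities below are hypothetical, chosen only to show the shape of
	// the check.
	nodeAllocatable := int64(1000) // fake extended-resource capacity on the node
	overhead := int64(200)         // overhead carried by the RuntimeClass

	filler := effectiveRequest(700, overhead)     // "Pod to consume most of the node's resource"
	additional := effectiveRequest(150, overhead) // "another pod that requires unavailable amount of resources"

	left := nodeAllocatable - filler
	fmt.Println("left after filler pod:", left)                    // 100
	fmt.Println("additional pod schedulable:", left >= additional) // false -> FailedScheduling
}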
Oct 30 04:58:34.513: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 04:58:34.532: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 04:58:34.532: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 04:58:34.532: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 04:58:34.532: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 30 04:58:34.550: INFO: ComputeCPUMemFraction for node: node1 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 04:58:34.550: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 04:58:34.550: INFO: ComputeCPUMemFraction for node: node2 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:34.550: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 04:58:34.550: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Oct 30 04:58:42.652: INFO: ComputeCPUMemFraction for node: node2 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 04:58:42.652: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 04:58:42.652: INFO: ComputeCPUMemFraction for node: node1 Oct 30 
04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 04:58:42.652: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 04:58:42.652: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 04:58:42.663: INFO: Waiting for running... Oct 30 04:58:42.667: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
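Editor's note on the numbers above: cpuFraction and memFraction are simply the summed pod requests on a node divided by the node's allocatable capacity, capped at 1, which is why 100 milli-CPU out of 77000 gives 0.0012987... and why fully packed nodes later report a fraction of exactly 1. A minimal sketch of that arithmetic, outside the e2e framework and with illustrative names:

package main

import "fmt"

// fraction returns requested/allocatable, capped at 1.0 -- matching how the
// log reports fully packed nodes (cpuFraction: 1 even when requests exceed
// allocatable).
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Values taken from the log: 100m CPU requested of 77000m allocatable,
	// 104857600 bytes of memory requested of 178884628480 allocatable.
	fmt.Println(fraction(100, 77000))              // ~0.0012987
	fmt.Println(fraction(104857600, 178884628480)) // ~0.00058617
}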
Oct 30 04:58:52.734: INFO: ComputeCPUMemFraction for node: node2 Oct 30 04:58:52.734: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 04:58:52.735: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
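Editor's note: the single large pod logged above (Cpu: 38400, Mem: 89350041600) is the filler the test creates to drive each node to the same utilisation before scoring. Conceptually its request is "target fraction × allocatable − already requested"; the sketch below shows that calculation and a filler pod spec. The pod name, image, target ratio and the NodeName pinning are illustrative assumptions, not taken from the framework:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumption: every node should sit at roughly the same CPU ratio before
	// the scoring test runs, so the filler asks for the gap between the
	// target and what is already requested on the node.
	allocatableCPU := int64(77000) // milli-CPU, from the log
	requestedCPU := int64(100)     // milli-CPU already requested, from the log
	targetRatio := 0.5             // illustrative target, not the test's value
	fillerCPU := int64(targetRatio*float64(allocatableCPU)) - requestedCPU

	filler := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "balance-filler"}, // hypothetical name
		Spec: corev1.PodSpec{
			NodeName: "node2", // one way to pin the filler; the framework may place it differently
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: *resource.NewMilliQuantity(fillerCPU, resource.DecimalSI),
					},
				},
			}},
		},
	}
	fmt.Println(filler.Name, "requests", fillerCPU, "milli-CPU")
}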
Oct 30 04:58:52.735: INFO: ComputeCPUMemFraction for node: node1 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Pod for on the node: 80f1e555-3e2f-41ad-931a-cb5ff236a597-0, Cpu: 38400, Mem: 89350041600 Oct 30 04:58:52.735: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 04:58:52.735: INFO: Node: node1, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884632576, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 04:59:14.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9913" for this suite. 
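Editor's note: the verification just logged (four ReplicaSet replicas already on "node2", the new test pod expected on "node1") is the behaviour of a soft topology spread constraint: with whenUnsatisfiable set to ScheduleAnyway the scheduler scores nodes by skew instead of filtering them out. A hedged sketch of such a pod spec in Go; the label key/value and image are illustrative, while the topology key is the dedicated one the test applies:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"foo": "bar"}, // must match the selector below
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score", // dedicated key from the log
				WhenUnsatisfiable: corev1.ScheduleAnyway,         // scoring, not filtering
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"foo": "bar"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}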
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:100.387 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":3,"skipped":1154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 04:59:14.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 30 04:59:14.854: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:00:14.911: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
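Editor's note: the sequence above (one high and three low pods occupy 9/10 of the fake resource on each node, then a medium pod is created and only "high", "low-1" and "medium" remain running) is priority-based preemption: the medium pod can only satisfy a hard spread constraint by evicting a lower-priority pod. A hedged sketch of the two objects involved; the class name, priority value, labels and image are illustrative, only the topology key comes from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A medium priority class; the test defines its own high/medium/low classes.
	medium := schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: "medium-priority"}, // illustrative name
		Value:      100,
	}

	// A pod that must spread across the dedicated topology key; DoNotSchedule
	// makes the constraint hard, so the scheduler preempts if needed.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"app": "spread-me"}, // illustrative
		},
		Spec: corev1.PodSpec{
			PriorityClassName: medium.Name,
			Containers:        []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption", // dedicated key from the log
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "spread-me"}},
			}},
		},
	}
	fmt.Println(pod.Spec.PriorityClassName)
}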
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:00:57.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2781" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:102.382 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":4,"skipped":1402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:00:57.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:00:57.234: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:01:57.291: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:01:57.293: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:01:57.315: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:01:57.315: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:01:57.315: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:01:57.315: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 30 05:01:57.329: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:01:57.329: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.329: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:01:57.330: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:01:57.330: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.330: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:01:57.330: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Oct 30 05:01:57.347: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:01:57.347: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:01:57.347: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on 
the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:01:57.347: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:01:57.347: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 05:01:57.363: INFO: Waiting for running... Oct 30 05:01:57.364: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 30 05:02:02.435: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:02:02.435: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:02:02.436: INFO: Node: node1, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Oct 30 05:02:02.436: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Pod for on the node: 61a8c02a-d4dd-4736-80a5-3897dec363a3-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:02:02.436: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:02:02.436: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884628480, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7867 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7867, will wait for the garbage collector to delete the pods Oct 30 05:02:08.615: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 3.801497ms Oct 30 05:02:08.715: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.599263ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:02:23.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7867" for this suite. 
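Editor's note: "Trying to apply avoidPod annotations on the first node" refers to the scheduler.alpha.kubernetes.io/preferAvoidPods node annotation, which de-prioritises pods owned by the listed controller on that node; that is why the scaled-up ReplicationController pod is expected to avoid "node1". A hedged sketch of building that annotation value with the core/v1 helper types; the UID, reason and message are placeholders, only the RC name comes from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod", // RC name from the log
					UID:        "replace-with-real-uid",        // placeholder
					Controller: &isController,
				},
			},
			Reason:  "illustrative reason",
			Message: "illustrative message",
		}},
	}

	val, _ := json.Marshal(avoid)
	// The annotation is set on the Node object under this well-known key.
	annotations := map[string]string{
		corev1.PreferAvoidPodsAnnotationKey: string(val),
	}
	fmt.Println(annotations[corev1.PreferAvoidPodsAnnotationKey])
}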
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:86.028 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":5,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:02:23.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:02:23.265: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:03:23.318: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:03:23.321: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:03:23.339: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:03:23.339: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:03:23.339: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:03:23.339: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 30 05:03:23.355: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:03:23.355: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:03:23.355: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:03:23.355: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 
77000, cpuFraction: 0.0012987012987012987 Oct 30 05:03:23.355: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Oct 30 05:03:27.398: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:03:27.399: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:03:27.399: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:27.399: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 
Oct 30 05:03:27.399: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:03:27.399: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 05:03:27.410: INFO: Waiting for running... Oct 30 05:03:27.414: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 30 05:03:32.492: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:03:32.492: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Oct 30 05:03:32.492: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:03:32.492: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:03:32.492: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:03:44.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3243" for this suite. 
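Editor's note: the steps above first place pod-with-label-security-s1 on one node and then launch a second pod whose podAntiAffinity term selects that label, expecting the scheduler to put it on the other node. A hedged sketch of such an anti-affinity spec; the selector values and image are illustrative and the test's exact term may differ:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					// Hard rule: do not co-locate with pods carrying the
					// security=S1 label on the same hostname.
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						TopologyKey: "kubernetes.io/hostname",
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"S1"}, // illustrative value
							}},
						},
					}},
				},
			},
		},
	}
	fmt.Println(pod.Spec.Affinity.PodAntiAffinity != nil)
}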
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:81.304 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":6,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:03:44.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:03:44.577: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:03:44.585: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:03:44.587: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:03:44.596: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:03:44.596: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:03:44.596: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:03:44.596: INFO: Container discover ready: false, restart count 0 Oct 30 05:03:44.596: INFO: Container init ready: false, restart count 0 Oct 30 05:03:44.596: INFO: Container install ready: false, restart count 0 Oct 30 05:03:44.596: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:03:44.596: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:03:44.596: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:03:44.596: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:03:44.596: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:03:44.596: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:03:44.596: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:03:44.596: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:03:44.596: INFO: Container collectd ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:03:44.596: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:03:44.596: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:03:44.596: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:03:44.596: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container grafana ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:03:44.596: INFO: pod-with-pod-antiaffinity from sched-priority-3243 started at 2021-10-30 
05:03:32 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.596: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Oct 30 05:03:44.596: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:03:44.609: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:03:44.609: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:03:44.609: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:03:44.609: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:03:44.609: INFO: Container discover ready: false, restart count 0 Oct 30 05:03:44.609: INFO: Container init ready: false, restart count 0 Oct 30 05:03:44.609: INFO: Container install ready: false, restart count 0 Oct 30 05:03:44.609: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:03:44.609: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:03:44.609: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:03:44.609: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:03:44.609: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:03:44.609: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:03:44.609: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:03:44.609: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:03:44.609: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:03:44.609: INFO: Container collectd ready: true, restart count 0 Oct 30 05:03:44.609: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:03:44.609: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:03:44.609: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:03:44.609: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:03:44.609: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:03:44.609: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container 
tas-extender ready: true, restart count 0 Oct 30 05:03:44.609: INFO: pod-with-label-security-s1 from sched-priority-3243 started at 2021-10-30 05:03:23 +0000 UTC (1 container statuses recorded) Oct 30 05:03:44.609: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e6a7b07e-4188-4942-bd22-f0bbc61bebdc 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-e6a7b07e-4188-4942-bd22-f0bbc61bebdc off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e6a7b07e-4188-4942-bd22-f0bbc61bebdc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:04:00.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1325" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":7,"skipped":2542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ 
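Editor's note: the three pods in the hostPort test above all claim port 54321 yet land on the same node, because a host port only conflicts when hostIP, hostPort and protocol all collide. A minimal sketch of the three differing port declarations, using the values from the log (the surrounding pod specs are omitted):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// pod1: 127.0.0.1:54321/TCP, pod2: 10.10.190.208:54321/TCP,
	// pod3: 10.10.190.208:54321/UDP -- three distinct (hostIP, port, protocol)
	// tuples, so none of them conflicts with the others.
	ports := []corev1.ContainerPort{
		{HostIP: "127.0.0.1", HostPort: 54321, ContainerPort: 54321, Protocol: corev1.ProtocolTCP},
		{HostIP: "10.10.190.208", HostPort: 54321, ContainerPort: 54321, Protocol: corev1.ProtocolTCP},
		{HostIP: "10.10.190.208", HostPort: 54321, ContainerPort: 54321, Protocol: corev1.ProtocolUDP},
	}
	for _, p := range ports {
		fmt.Printf("%s:%d/%s\n", p.HostIP, p.HostPort, p.Protocol)
	}
}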
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:04:00.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:04:00.771: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:04:00.780: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:04:00.784: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:04:00.792: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:04:00.792: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:00.792: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:00.792: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:04:00.792: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:00.792: INFO: Container init ready: false, restart count 0 Oct 30 05:04:00.792: INFO: Container install ready: false, restart count 0 Oct 30 05:04:00.792: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:04:00.792: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:00.792: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:00.792: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:04:00.792: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:00.792: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:00.792: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.792: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:00.792: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:00.792: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:00.792: INFO: Container collectd-exporter ready: 
true, restart count 0 Oct 30 05:04:00.792: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:00.792: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:00.792: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:00.792: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:00.792: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:04:00.792: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:04:00.792: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:04:00.792: INFO: Container grafana ready: true, restart count 0 Oct 30 05:04:00.792: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:04:00.792: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:04:00.802: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:04:00.802: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:00.802: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:00.802: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:04:00.802: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:00.802: INFO: Container init ready: false, restart count 0 Oct 30 05:04:00.802: INFO: Container install ready: false, restart count 0 Oct 30 05:04:00.802: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:04:00.802: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:04:00.802: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:00.802: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:00.802: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:04:00.802: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:00.802: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:00.802: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:00.802: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:00.802: INFO: 
Container collectd ready: true, restart count 0 Oct 30 05:04:00.802: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:00.802: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:00.802: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:00.802: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:00.802: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:00.802: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:04:00.802: INFO: pod1 from sched-pred-1325 started at 2021-10-30 05:03:48 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container agnhost ready: true, restart count 0 Oct 30 05:04:00.802: INFO: pod2 from sched-pred-1325 started at 2021-10-30 05:03:52 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container agnhost ready: true, restart count 0 Oct 30 05:04:00.802: INFO: pod3 from sched-pred-1325 started at 2021-10-30 05:03:56 +0000 UTC (1 container statuses recorded) Oct 30 05:04:00.802: INFO: Container agnhost ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-01a3c7a3-d04f-467d-91af-360c63b5f3df=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-4baed9c5-51d8-4adb-b6c6-ec1dc7735404 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-4baed9c5-51d8-4adb-b6c6-ec1dc7735404 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-4baed9c5-51d8-4adb-b6c6-ec1dc7735404 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-01a3c7a3-d04f-467d-91af-360c63b5f3df=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:04:08.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6146" for this suite. 
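The taints-tolerations (matching) test above taints the chosen node with a random NoSchedule key, labels it, and then relaunches the pod with both a matching toleration and a nodeSelector on that label; only then does it schedule. A minimal sketch of the relaunched pod, with illustrative key names standing in for the random UUID-based ones the test generates:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative keys; the e2e test uses kubernetes.io/e2e-taint-key-<uuid> and
	// kubernetes.io/e2e-label-key-<uuid>.
	taintKey := "kubernetes.io/e2e-taint-key-example"
	labelKey := "kubernetes.io/e2e-label-key-example"

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			// Pin the pod to the tainted node via the label applied in the test.
			NodeSelector: map[string]string{labelKey: "testing-label-value"},
			// Tolerate the NoSchedule taint so the scheduler will still place the pod there.
			Tolerations: []corev1.Toleration{{
				Key:      taintKey,
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{{
				Name:  "with-tolerations",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Printf("tolerates %s=testing-taint-value:%s on nodes labeled %v\n",
		taintKey, corev1.TaintEffectNoSchedule, pod.Spec.NodeSelector)
}
```

Omitting the toleration, or mismatching its value, would leave the pod Pending on that node, which is exactly what the "not matching" variant later in this run verifies.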
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.170 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":8,"skipped":3539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:04:08.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:04:08.940: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:04:08.948: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:04:08.951: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:04:08.961: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:04:08.961: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:08.961: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:04:08.961: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:08.961: INFO: Container init ready: false, restart count 0 Oct 30 05:04:08.961: INFO: Container install ready: false, restart count 0 Oct 30 05:04:08.961: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:04:08.961: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:08.961: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:08.961: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:04:08.961: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:08.961: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:08.961: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:08.961: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:08.961: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:08.961: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:08.961: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:08.961: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:04:08.961: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container grafana ready: true, restart count 0 Oct 30 05:04:08.961: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:04:08.961: INFO: with-tolerations from sched-pred-6146 started at 2021-10-30 05:04:04 +0000 
UTC (1 container statuses recorded) Oct 30 05:04:08.961: INFO: Container with-tolerations ready: true, restart count 0 Oct 30 05:04:08.962: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:04:08.970: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:04:08.970: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:08.970: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:08.970: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:04:08.970: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:08.970: INFO: Container init ready: false, restart count 0 Oct 30 05:04:08.970: INFO: Container install ready: false, restart count 0 Oct 30 05:04:08.970: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:04:08.970: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:04:08.970: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:08.970: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:08.970: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:04:08.970: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:08.970: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:08.970: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:08.970: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:08.970: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:08.970: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:08.970: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:08.970: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:08.970: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:08.970: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:08.970: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container tas-extender ready: true, 
restart count 0 Oct 30 05:04:08.970: INFO: pod1 from sched-pred-1325 started at 2021-10-30 05:03:48 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container agnhost ready: false, restart count 0 Oct 30 05:04:08.970: INFO: pod2 from sched-pred-1325 started at 2021-10-30 05:03:52 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container agnhost ready: false, restart count 0 Oct 30 05:04:08.970: INFO: pod3 from sched-pred-1325 started at 2021-10-30 05:03:56 +0000 UTC (1 container statuses recorded) Oct 30 05:04:08.970: INFO: Container agnhost ready: false, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7d6f94a8-4e73-45fd-bfa8-e61584294bb3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7d6f94a8-4e73-45fd-bfa8-e61584294bb3 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7d6f94a8-4e73-45fd-bfa8-e61584294bb3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:04:19.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1989" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.126 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":9,"skipped":3611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:04:19.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:04:19.083: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:04:19.091: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:04:19.093: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:04:19.102: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:04:19.102: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:19.102: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:04:19.102: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:19.102: INFO: Container init ready: false, restart count 0 Oct 30 05:04:19.102: INFO: Container install ready: false, restart count 0 Oct 30 05:04:19.102: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:04:19.102: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:19.102: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:19.102: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:04:19.102: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:19.102: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:19.102: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:19.102: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:19.102: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 
05:04:19.102: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:19.102: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:19.102: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:04:19.102: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container grafana ready: true, restart count 0 Oct 30 05:04:19.102: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:04:19.102: INFO: with-labels from sched-pred-1989 started at 2021-10-30 05:04:13 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container with-labels ready: true, restart count 0 Oct 30 05:04:19.102: INFO: with-tolerations from sched-pred-6146 started at 2021-10-30 05:04:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.102: INFO: Container with-tolerations ready: false, restart count 0 Oct 30 05:04:19.102: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:04:19.109: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:04:19.109: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:19.109: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:19.109: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:04:19.109: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:19.109: INFO: Container init ready: false, restart count 0 Oct 30 05:04:19.109: INFO: Container install ready: false, restart count 0 Oct 30 05:04:19.109: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:04:19.109: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:04:19.109: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:19.109: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:19.109: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:04:19.109: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:19.109: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:19.109: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg 
from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.109: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:19.109: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:19.109: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:19.109: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:19.109: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:19.109: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:19.109: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:19.109: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:19.110: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:04:19.110: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-303ebf12-d7a9-43e4-91c4-af0b07183ec9=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-e85fd308-6569-4bf6-b54e-785d99d0fedd testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c2ca1c8ac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1073/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c857e1805], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c977be724], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 301.839295ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c9e235272], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67ca7a9316a], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67d1be14014], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b2b67d1de54ba5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-303ebf12-d7a9-43e4-91c4-af0b07183ec9: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b2b67d1de54ba5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-303ebf12-d7a9-43e4-91c4-af0b07183ec9: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c2ca1c8ac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1073/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c857e1805], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c977be724], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 301.839295ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67c9e235272], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67ca7a9316a], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b67d1be14014], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-303ebf12-d7a9-43e4-91c4-af0b07183ec9=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b2b67d6c399390], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1073/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-e85fd308-6569-4bf6-b54e-785d99d0fedd off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-e85fd308-6569-4bf6-b54e-785d99d0fedd STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-303ebf12-d7a9-43e4-91c4-af0b07183ec9=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:04:25.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1073" for this suite. 
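The "apply a random taint" and "Removing taint off the node" steps amount to editing Node.Spec.Taints. The e2e framework uses its own helpers for this; a rough client-go equivalent (error handling and retry-on-conflict trimmed, kubeconfig path taken from this run) looks like this:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// addTaint appends a taint to the node's spec and updates it.
func addTaint(ctx context.Context, cs kubernetes.Interface, nodeName string, taint corev1.Taint) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	node.Spec.Taints = append(node.Spec.Taints, taint)
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

// removeTaint drops every taint with a matching key from the node's spec.
func removeTaint(ctx context.Context, cs kubernetes.Interface, nodeName, key string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key != key {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	taint := corev1.Taint{Key: "example.com/e2e-taint-key", Value: "testing-taint-value", Effect: corev1.TaintEffectNoSchedule}
	if err := addTaint(context.TODO(), cs, "node2", taint); err != nil {
		panic(err)
	}
	fmt.Println("taint applied; an untolerated pod now stays Pending with a FailedScheduling event")
	if err := removeTaint(context.TODO(), cs, "node2", taint.Key); err != nil {
		panic(err)
	}
}
```

As the events above show, the taint only blocks placement while it is present: once it is removed, still-no-tolerations is immediately assigned to node2.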
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":10,"skipped":4515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:04:25.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:04:25.253: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:04:25.261: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:04:25.263: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:04:25.273: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:04:25.273: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:25.273: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:04:25.273: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:25.273: INFO: Container init ready: false, restart count 0 Oct 30 05:04:25.273: INFO: Container install ready: false, restart count 0 Oct 30 05:04:25.273: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:04:25.273: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:25.273: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:25.273: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:04:25.273: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:25.273: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:25.273: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:25.273: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:25.273: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:25.273: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:25.273: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:25.273: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:04:25.273: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container grafana ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:04:25.273: INFO: with-labels from sched-pred-1989 started at 2021-10-30 05:04:13 +0000 UTC 
(1 container statuses recorded) Oct 30 05:04:25.273: INFO: Container with-labels ready: true, restart count 0 Oct 30 05:04:25.273: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:04:25.280: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:04:25.280: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:25.280: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:25.280: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:04:25.280: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:25.280: INFO: Container init ready: false, restart count 0 Oct 30 05:04:25.280: INFO: Container install ready: false, restart count 0 Oct 30 05:04:25.280: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:04:25.280: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:04:25.280: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:25.280: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:25.280: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:04:25.280: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:25.280: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:25.280: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:25.280: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:25.280: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:25.280: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:25.280: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:25.280: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:25.280: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:25.280: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:25.280: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container tas-extender ready: true, restart 
count 0 Oct 30 05:04:25.280: INFO: still-no-tolerations from sched-pred-1073 started at 2021-10-30 05:04:24 +0000 UTC (1 container statuses recorded) Oct 30 05:04:25.280: INFO: Container still-no-tolerations ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b2b67d9d165eaf], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:04:26.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5492" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":11,"skipped":4738,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:04:26.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:04:26.344: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:04:26.358: INFO: Waiting for terminating namespaces to be deleted... 
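The "NodeAffinity is respected if not matching" spec above never gets past scheduling: restricted-pod carries a nodeSelector that no node satisfies, so the only outcome is the FailedScheduling event quoted in the log (2 workers fail the selector, 3 masters are excluded by the node-role.kubernetes.io/master taint, hence "0/5 nodes are available"). A sketch of such a pod, with an illustrative selector key:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A nodeSelector no node in the cluster carries; key and value are illustrative.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	// With 2 workers failing the selector and 3 masters carrying the
	// node-role.kubernetes.io/master:NoSchedule taint, all 5 nodes are ruled out,
	// which is exactly the "0/5 nodes are available" message in the event above.
	fmt.Printf("%s stays Pending: nodeSelector=%v\n", pod.Name, pod.Spec.NodeSelector)
}
```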
Oct 30 05:04:26.363: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:04:26.375: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:04:26.376: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:26.376: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:04:26.376: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:26.376: INFO: Container init ready: false, restart count 0 Oct 30 05:04:26.376: INFO: Container install ready: false, restart count 0 Oct 30 05:04:26.376: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:04:26.376: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:26.376: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:26.376: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:04:26.376: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:26.376: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:26.376: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:26.376: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:26.376: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:26.376: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:26.376: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:26.376: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:04:26.376: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container grafana ready: true, restart count 0 Oct 30 05:04:26.376: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:04:26.376: INFO: with-labels from sched-pred-1989 started at 2021-10-30 05:04:13 +0000 UTC 
(1 container statuses recorded) Oct 30 05:04:26.376: INFO: Container with-labels ready: false, restart count 0 Oct 30 05:04:26.376: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:04:26.391: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:04:26.391: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:04:26.391: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:04:26.391: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:04:26.391: INFO: Container discover ready: false, restart count 0 Oct 30 05:04:26.391: INFO: Container init ready: false, restart count 0 Oct 30 05:04:26.391: INFO: Container install ready: false, restart count 0 Oct 30 05:04:26.391: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:04:26.391: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:04:26.391: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:04:26.391: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:04:26.391: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:04:26.391: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:04:26.391: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:04:26.391: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:04:26.391: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:04:26.391: INFO: Container collectd ready: true, restart count 0 Oct 30 05:04:26.391: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:04:26.391: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:04:26.391: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:04:26.391: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:04:26.391: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:04:26.391: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container tas-extender ready: true, restart 
count 0 Oct 30 05:04:26.391: INFO: still-no-tolerations from sched-pred-1073 started at 2021-10-30 05:04:24 +0000 UTC (1 container statuses recorded) Oct 30 05:04:26.391: INFO: Container still-no-tolerations ready: false, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:04:46.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1029" for this suite. 
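The PodTopologySpread Filtering case labels the two chosen nodes with the dedicated topologyKey kubernetes.io/e2e-pts-filter and then creates 4 pods constrained by MaxSkew=1 with whenUnsatisfiable=DoNotSchedule, which forces a 2+2 split. A minimal sketch of one such pod, assuming an illustrative "app: pts-filter" selector label:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadPod builds one of the 4 test pods. With MaxSkew=1 and DoNotSchedule,
// the per-topology pod counts for the selected label may never differ by more
// than 1, so 4 pods must land 2+2 across the two labeled nodes.
func spreadPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			Labels: map[string]string{"app": "pts-filter"}, // illustrative selector label
		},
		Spec: corev1.PodSpec{
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-filter",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "pts-filter"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
}

func main() {
	for i := 1; i <= 4; i++ {
		p := spreadPod(fmt.Sprintf("pts-pod-%d", i))
		fmt.Println(p.Name, "maxSkew:", p.Spec.TopologySpreadConstraints[0].MaxSkew)
	}
}
```

A 3+1 placement would have skew 2 and violate the constraint, so with DoNotSchedule the scheduler never produces it.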
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:20.180 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":12,"skipped":4754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:04:46.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:04:46.541: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:05:46.600: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:05:46.602: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:05:46.620: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:05:46.620: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:05:46.620: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:05:46.620: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
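The SchedulerPriorities case being set up here ("Pod should be preferably scheduled to nodes pod can tolerate") exercises soft taints: a PreferNoSchedule taint does not filter a node out, it only lowers that node's score for pods that do not tolerate it, so a pod that does tolerate the taint is preferentially placed on the tainted node. A hedged sketch of the taint/toleration pair, with illustrative names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// PreferNoSchedule is a soft effect: the TaintToleration score plugin ranks the
	// tainted node lower only for pods lacking a matching toleration.
	taint := corev1.Taint{
		Key:    "example.com/prefer", // illustrative key and value
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}
	fmt.Printf("taint:      %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration: %s %s %s (%s)\n", toleration.Key, toleration.Operator, toleration.Value, toleration.Effect)
}
```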
Oct 30 05:05:46.637: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:05:46.637: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:05:46.637: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:05:46.637: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 
[It] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
Oct 30 05:05:46.654: INFO: ComputeCPUMemFraction for node: node1
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Oct 30 05:05:46.654: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
Oct 30 05:05:46.654: INFO: ComputeCPUMemFraction for node: node2
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200
Oct 30 05:05:46.654: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Oct 30 05:05:46.654: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
Oct 30 05:05:46.670: INFO: Waiting for running...
Oct 30 05:05:46.671: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Oct 30 05:05:51.740: INFO: ComputeCPUMemFraction for node: node1
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.740: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1
Oct 30 05:05:51.740: INFO: Node: node1, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884632576, memFraction: 1
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Oct 30 05:05:51.740: INFO: ComputeCPUMemFraction for node: node2
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Pod for on the node: 174aa538-c1f6-469c-a8da-035e721ce318-0, Cpu: 38400, Mem: 89350041600
Oct 30 05:05:51.741: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1
Oct 30 05:05:51.741: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884628480, memFraction: 1
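The jump from a roughly 0.13% CPU fraction to cpuFraction: 1 is the framework's balancing step: before measuring the priority under test, it creates filler pods sized so both nodes sit at a comparable utilisation. Assuming a target ratio of about half of allocatable, the 38400-millicore filler request logged above is consistent with target*allocatable minus what was already requested (0.5*77000 - 100 = 38400 for node1). The helper below is an illustrative reconstruction of that sizing, not the framework's actual code.

package sketch

// fillerCPURequest sizes a filler pod so a node reaches an assumed target CPU
// utilisation. With node1's numbers from the log and an assumed 0.5 ratio,
// it returns 38400 millicores, matching the filler request logged above.
func fillerCPURequest(allocatableMilli, requestedMilli int64, targetRatio float64) int64 {
	want := int64(targetRatio * float64(allocatableMilli))
	if want <= requestedMilli {
		return 0 // already at or above the target; nothing to add
	}
	return want - requestedMilli
}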
STEP: Trying to apply 10 (tolerable) taints on the first node.
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-154887ff-8a46-4e27-839a=testing-taint-value-c0acbe76-8f52-434c-ba87-422a7a6fbf7b:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5c429fe4-d141-49c8-b091=testing-taint-value-6164de0f-6cda-4265-a35e-c0ded26de003:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bf438b8a-8b10-461d-89eb=testing-taint-value-d78c08a4-5aad-44c3-b2a6-a559ac1dcda2:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b9340210-fa42-4182-9bfb=testing-taint-value-d31739a2-c4b0-4fe2-bde6-a0ec2f2e6cdd:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4408bb3c-720f-4484-bfb7=testing-taint-value-88d992b1-8300-44d5-89ef-3287da3db01f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bda73033-0eee-40af-a4fe=testing-taint-value-e0e5a529-0670-4f1c-9962-e00001cd2f2d:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0e4cc655-40b0-422a-ab72=testing-taint-value-afa62b87-abdd-4613-a8d1-73befa25a246:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-21863fa7-faf1-47e0-bd3e=testing-taint-value-f299319b-23aa-445d-87d9-61f4f1456039:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8ec87d34-3496-4b47-87b5=testing-taint-value-0cc55c76-cf73-40a6-8f21-eb71bd9e133c:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7a6218ef-17a9-4a16-b36f=testing-taint-value-6ad6f28d-bf0c-4275-87c2-6f083880b220:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-82f9cdd6-1fd8-449e-b915=testing-taint-value-e1e0a1a5-7e15-47e5-ba73-eb2274ac2cc3:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-eea50006-590c-4084-91a4=testing-taint-value-058af5a8-2250-4071-b42f-6a7f3e766687:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e4296683-35db-4327-ae4b=testing-taint-value-ac8db5d8-37a2-4517-8cbc-4a68a6bda55f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-08517683-f1bd-438a-bb63=testing-taint-value-fae03ff2-21a4-4a59-8cfc-6956c3212348:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cd82c4a6-28d5-4da0-8314=testing-taint-value-fd1211da-02cf-4dd6-bfdd-30b957975288:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-764aee40-476e-4c3a-8757=testing-taint-value-52411b3d-6179-412e-b690-889925a050c1:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d3145eb9-8205-4529-ae0d=testing-taint-value-3e3ddf65-3f81-43f0-bd24-235422b5bd7f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-66e628b9-c415-4805-844d=testing-taint-value-5f6edda2-3ae0-42fe-9b83-471d5a79ab83:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9fb079e7-4a70-4573-8d9e=testing-taint-value-53e6e91f-0213-4e5c-843a-a64f455ec4c4:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dedadb0e-bfda-4f48-a539=testing-taint-value-0bc5ba45-5514-482b-bbcb-9c68e42dfba5:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-82f9cdd6-1fd8-449e-b915=testing-taint-value-e1e0a1a5-7e15-47e5-ba73-eb2274ac2cc3:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-eea50006-590c-4084-91a4=testing-taint-value-058af5a8-2250-4071-b42f-6a7f3e766687:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e4296683-35db-4327-ae4b=testing-taint-value-ac8db5d8-37a2-4517-8cbc-4a68a6bda55f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-08517683-f1bd-438a-bb63=testing-taint-value-fae03ff2-21a4-4a59-8cfc-6956c3212348:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cd82c4a6-28d5-4da0-8314=testing-taint-value-fd1211da-02cf-4dd6-bfdd-30b957975288:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-764aee40-476e-4c3a-8757=testing-taint-value-52411b3d-6179-412e-b690-889925a050c1:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d3145eb9-8205-4529-ae0d=testing-taint-value-3e3ddf65-3f81-43f0-bd24-235422b5bd7f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-66e628b9-c415-4805-844d=testing-taint-value-5f6edda2-3ae0-42fe-9b83-471d5a79ab83:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9fb079e7-4a70-4573-8d9e=testing-taint-value-53e6e91f-0213-4e5c-843a-a64f455ec4c4:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dedadb0e-bfda-4f48-a539=testing-taint-value-0bc5ba45-5514-482b-bbcb-9c68e42dfba5:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-154887ff-8a46-4e27-839a=testing-taint-value-c0acbe76-8f52-434c-ba87-422a7a6fbf7b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5c429fe4-d141-49c8-b091=testing-taint-value-6164de0f-6cda-4265-a35e-c0ded26de003:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bf438b8a-8b10-461d-89eb=testing-taint-value-d78c08a4-5aad-44c3-b2a6-a559ac1dcda2:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b9340210-fa42-4182-9bfb=testing-taint-value-d31739a2-c4b0-4fe2-bde6-a0ec2f2e6cdd:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4408bb3c-720f-4484-bfb7=testing-taint-value-88d992b1-8300-44d5-89ef-3287da3db01f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bda73033-0eee-40af-a4fe=testing-taint-value-e0e5a529-0670-4f1c-9962-e00001cd2f2d:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0e4cc655-40b0-422a-ab72=testing-taint-value-afa62b87-abdd-4613-a8d1-73befa25a246:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-21863fa7-faf1-47e0-bd3e=testing-taint-value-f299319b-23aa-445d-87d9-61f4f1456039:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8ec87d34-3496-4b47-87b5=testing-taint-value-0cc55c76-cf73-40a6-8f21-eb71bd9e133c:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7a6218ef-17a9-4a16-b36f=testing-taint-value-6ad6f28d-bf0c-4275-87c2-6f083880b220:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 30 05:06:03.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-9755" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153

• [SLOW TEST:76.574 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":13,"skipped":5652,"failed":0}
Oct 30 05:06:03.094: INFO: Running AfterSuite actions on all nodes
Oct 30 05:06:03.094: INFO: Running AfterSuite actions on node 1
Oct 30 05:06:03.094: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0}

Ran 13 of 5770 Specs in 550.486 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped
PASS

Ginkgo ran 1 suite in 9m11.859445779s
Test Suite Passed
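For reference, the taint/toleration pairing the final spec relies on (a PreferNoSchedule taint on every node, tolerated only for the first node) can be expressed with client-go types as below. The key and value are placeholders rather than the generated ones in the log above; this is a sketch of the mechanism, not the e2e helper itself.

package sketch

import corev1 "k8s.io/api/core/v1"

// A soft taint: the scheduler tries to avoid placing pods here unless they
// tolerate it. Key and value are made-up placeholders.
var taint = corev1.Taint{
	Key:    "example.com/e2e-scheduling-priorities-demo",
	Value:  "testing-taint-value-demo",
	Effect: corev1.TaintEffectPreferNoSchedule,
}

// The matching toleration carried by the test pod, so the tainted first node
// becomes the preferred placement relative to the other tainted nodes.
var toleration = corev1.Toleration{
	Key:      taint.Key,
	Operator: corev1.TolerationOpEqual,
	Value:    taint.Value,
	Effect:   corev1.TaintEffectPreferNoSchedule,
}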