I1106 01:36:59.932669 23 e2e.go:129] Starting e2e run "1ecd3db2-1abb-48a4-a87f-fe7678a72c01" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636162618 - Will randomize all specs
Will run 13 of 5770 specs
Nov 6 01:36:59.947: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:36:59.952: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 6 01:36:59.981: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 6 01:37:00.046: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting
Nov 6 01:37:00.046: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting
Nov 6 01:37:00.046: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 6 01:37:00.046: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 6 01:37:00.046: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 6 01:37:00.063: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 6 01:37:00.063: INFO: e2e test version: v1.21.5
Nov 6 01:37:00.064: INFO: kube-apiserver version: v1.21.1
Nov 6 01:37:00.064: INFO: >>> kubeConfig: /root/.kube/config
Nov 6 01:37:00.071: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:37:00.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W1106 01:37:00.107306 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 6 01:37:00.107: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 6 01:37:00.110: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:37:00.112: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:37:00.119: INFO: Waiting for terminating namespaces to be deleted... Nov 6 01:37:00.128: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:37:00.140: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:37:00.140: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:37:00.140: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:37:00.140: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:37:00.140: INFO: Container discover ready: false, restart count 0 Nov 6 01:37:00.140: INFO: Container init ready: false, restart count 0 Nov 6 01:37:00.140: INFO: Container install ready: false, restart count 0 Nov 6 01:37:00.140: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:37:00.140: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:37:00.140: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:37:00.140: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:37:00.140: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:37:00.140: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:37:00.140: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:37:00.140: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.140: INFO: Container kube-sriovdp 
ready: true, restart count 0 Nov 6 01:37:00.140: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:37:00.140: INFO: Container collectd ready: true, restart count 0 Nov 6 01:37:00.140: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:37:00.140: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:37:00.141: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:37:00.141: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:37:00.141: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:37:00.141: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:37:00.141: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:37:00.141: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:37:00.141: INFO: Container grafana ready: true, restart count 0 Nov 6 01:37:00.141: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:37:00.141: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.141: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:37:00.141: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:37:00.150: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:37:00.150: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:37:00.150: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:37:00.150: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:37:00.150: INFO: Container discover ready: false, restart count 0 Nov 6 01:37:00.150: INFO: Container init ready: false, restart count 0 Nov 6 01:37:00.150: INFO: Container install ready: false, restart count 0 Nov 6 01:37:00.150: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.150: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:37:00.150: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.150: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:37:00.150: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.150: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:37:00.150: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.150: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:37:00.150: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.150: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:37:00.150: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.151: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:37:00.151: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 
2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:37:00.151: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:37:00.151: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:37:00.151: INFO: Container collectd ready: true, restart count 0 Nov 6 01:37:00.151: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:37:00.151: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:37:00.151: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:37:00.151: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:37:00.151: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:37:00.151: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:37:00.151: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:37:00.151: INFO: Container prometheus-operator ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Nov 6 01:37:00.192: INFO: Pod cmk-bnvd2 requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod cmk-cfm9r requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod cmk-webhook-6c9d5f8578-wq5mk requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod kube-flannel-cqj7j requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod kube-flannel-hxwks requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod kube-multus-ds-amd64-mqrl8 requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod kube-multus-ds-amd64-p7bxx requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod kube-proxy-j9lmg requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod kube-proxy-mc4cs requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod kubernetes-dashboard-785dcbb76d-9wtdz requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod kubernetes-metrics-scraper-5558854cb-v9vgg requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod node-feature-discovery-worker-pn6cr requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod node-feature-discovery-worker-spmbf requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod collectd-5k6s9 requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod collectd-r2g57 requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod node-exporter-fvksz requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: 
INFO: Pod node-exporter-k7p79 requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Pod prometheus-operator-585ccfb458-vh55q requesting local ephemeral resource =0 on Node node2 Nov 6 01:37:00.192: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-qbp7s requesting local ephemeral resource =0 on Node node1 Nov 6 01:37:00.192: INFO: Using pod capacity: 40542413347 Nov 6 01:37:00.192: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 Nov 6 01:37:00.192: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Nov 6 01:37:00.388: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b4d13bfc932299], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b4d13c7d99a82d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b4d13c9844a276], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 447.404708ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b4d13cba99ec07], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b4d13d39ca7c3a], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b4d13bfd0ca6ce], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b4d13df93e011c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b4d13e7472f46b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 2.067061895s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b4d13e7cc8321b], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b4d13e869927e5], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b4d13c0203ab19], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b4d13e74d305c0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b4d13e9da2c1d7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 684.696598ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b4d13ea4c5dc8f], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b4d13eaaed2ec7], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b4d13c0297d3f9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b4d13e7be0f6aa], Reason = [Pulling], Message = [Pulling 
image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b4d13ed884a06c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.554223505s] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b4d13ee04d2575], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b4d13ee8d71509], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b4d13c0325f8ca], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-12 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b4d13e74d5869c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b4d13eb18a0b4c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.018457305s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b4d13eb825a504], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b4d13ebea6e267], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b4d13c03f676f4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b4d13e7785552b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b4d13ec5ede0c5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.315468712s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b4d13ecc891537], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b4d13ed376cc96], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b4d13c044d0603], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b4d13daf714d39], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b4d13dc8736ae2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 419.563132ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b4d13dce6b7ba5], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b4d13dd5ced55c], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b4d13c04db8a61], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b4d13d43104c6d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b4d13d5ea2b301], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 462.572921ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b4d13d7f5353e3], Reason = [Created], Message = 
[Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b4d13d89039dcc], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b4d13c055d07f9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b4d13d83fd81a2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b4d13db4dca6b8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 819.919713ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b4d13dbb612947], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b4d13dc2c86639], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b4d13c05e8798e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b4d13e4122ac9a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b4d13e6844a927], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 656.533353ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b4d13e6f8fa926], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b4d13e77fb6742], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b4d13c066e995a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b4d13e408beb8d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b4d13e532f0f10], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 312.670605ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b4d13e5a84e82c], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b4d13e633e45be], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b4d13c06fc6879], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b4d13d800552e9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b4d13d9f40fabc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 523.994771ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b4d13db4b407f6], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b4d13dbea555a8], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b4d13bfdb5fd10], Reason = [Scheduled], Message = [Successfully assigned 
sched-pred-4775/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b4d13dc06814d6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b4d13e47d10d85], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 2.271794801s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b4d13e81cbf508], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b4d13e88f1302a], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b4d13bfe39adf3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-3 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b4d13d619fa41b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b4d13d7b54b0a6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 431.289827ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b4d13d859b7f4a], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b4d13d90b239e4], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b4d13bfec286c6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b4d13c7ca8d3c2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b4d13c91179012], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 342.789695ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b4d13cb336c038], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b4d13d2b3a6231], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b4d13bff53b609], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-5 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b4d13ca98dcb06], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b4d13cbd025354], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 326.395864ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b4d13cd6d32e15], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b4d13d48441327], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b4d13bffdf4277], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b4d13dbf148cdd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b4d13e159bf74b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 
1.451705679s] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b4d13e3a7444a5], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b4d13e7e899dd8], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b4d13c006b7472], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-7 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b4d13d7b59dfcd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b4d13d8dc5fb02], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 309.068566ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b4d13da76279fb], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b4d13db49365a5], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b4d13c00f8ce21], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b4d13d510c33b9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b4d13df9724706], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 2.825251623s] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b4d13e019e004e], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b4d13e37d91260], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b4d13c018bd525], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4775/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b4d13e3818776b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b4d13e8838a92f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.344280386s] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b4d13e90656bbf], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b4d13e9746acfc], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b4d13f894e933b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:37:16.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4775" for this suite. 
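For context on the events above: the overcommit pods saturate each node's allocatable local ephemeral storage. The test takes the node's allocatable value (405424133473 bytes), gives each pod roughly a tenth of it as an ephemeral-storage request (the logged "pod capacity: 40542413347"), and starts 20 such pods across the two schedulable nodes, so the 21st pod fails with "Insufficient ephemeral-storage". A minimal Go sketch of such a pod spec, assuming the k8s.io/api and k8s.io/apimachinery packages; makeOvercommitPod is a hypothetical helper, not the e2e framework's actual builder:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makeOvercommitPod builds a pause pod that requests (and limits) an explicit
// amount of local ephemeral storage, mirroring the "overcommit-N" pods logged above.
// It is an illustrative helper, not the helper used by the e2e suite.
func makeOvercommitPod(name string, ephemeralBytes int64) *corev1.Pod {
	qty := resource.NewQuantity(ephemeralBytes, resource.BinarySI)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceEphemeralStorage: *qty},
					Limits:   corev1.ResourceList{corev1.ResourceEphemeralStorage: *qty},
				},
			}},
		},
	}
}

func main() {
	// Per-pod capacity logged by the test: allocatable (405424133473) / 10 pods per node.
	pod := makeOvercommitPod("overcommit-0", 40542413347)
	q := pod.Spec.Containers[0].Resources.Requests[corev1.ResourceEphemeralStorage]
	fmt.Println(pod.Name, "requests ephemeral-storage:", q.String())
}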
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.409 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":1,"skipped":558,"failed":0}
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:37:16.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Nov 6 01:37:16.515: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 6 01:38:16.569: INFO: Waiting for terminating namespaces to be deleted...
Nov 6 01:38:16.571: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 6 01:38:16.591: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting
Nov 6 01:38:16.591: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting
Nov 6 01:38:16.592: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 6 01:38:16.592: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
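The ComputeCPUMemFraction entries that follow sum the CPU and memory requests of the pods on each node and divide by the node's allocatable, capping the result at 1 once the node is saturated (e.g. 100m / 77000m ≈ 0.0013 and 104857600 / 178884628480 ≈ 0.00059 before the balanced pods are created, then 1 afterwards). A minimal Go sketch of that arithmetic, written from the logged values rather than the e2e framework's actual implementation:

package main

import "fmt"

// cpuMemFraction mirrors the ratios logged by ComputeCPUMemFraction:
// requested / allocatable for CPU (millicores) and memory (bytes), capped at 1.
func cpuMemFraction(requestedCPUMilli, allocatableCPUMilli, requestedMemBytes, allocatableMemBytes int64) (float64, float64) {
	cpu := float64(requestedCPUMilli) / float64(allocatableCPUMilli)
	mem := float64(requestedMemBytes) / float64(allocatableMemBytes)
	if cpu > 1 {
		cpu = 1
	}
	if mem > 1 {
		mem = 1
	}
	return cpu, mem
}

func main() {
	// Values logged for node1 before the balanced pods are created.
	cpu, mem := cpuMemFraction(100, 77000, 104857600, 178884628480)
	fmt.Printf("cpuFraction=%v memFraction=%v\n", cpu, mem) // ~0.0012987 and ~0.00058617, as in the log
}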
Nov 6 01:38:16.606: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:38:16.606: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 6 01:38:16.606: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.606: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 
6 01:38:16.606: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:38:16.606: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Nov 6 01:38:16.623: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:38:16.623: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 6 01:38:16.623: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for 
on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:38:16.623: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:38:16.623: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 6 01:38:16.638: INFO: Waiting for running... Nov 6 01:38:16.639: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 6 01:38:21.714: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 6 01:38:21.714: INFO: Node: node1, totalRequestedMemResource: 1340355481600, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 6 01:38:21.714: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Pod for on the node: b710cd91-f6c5-4668-beeb-406f36432818-0, Cpu: 38400, Mem: 89350041600 Nov 6 01:38:21.714: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 6 01:38:21.714: INFO: Node: node2, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884632576, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
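The ten "tolerable" taints verified below are PreferNoSchedule taints with generated key/value suffixes, and the test's "with-tolerations" pod carries a matching toleration for each, which is why it is preferably scheduled onto the tainted first node. A minimal Go sketch of one such taint/toleration pair, assuming the k8s.io/api/core/v1 types; the key and value are shortened placeholders, not the generated UUID-suffixed ones:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One PreferNoSchedule taint of the form applied by the test
	// (illustrative key/value, not the generated ones in the log).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example",
		Value:  "testing-taint-value-example",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// The matching toleration a pod like "with-tolerations" would carry.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("taint: %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration matches taint: %v\n", toleration.ToleratesTaint(&taint))
}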
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5f0560cc-c64b-4501-b69f=testing-taint-value-68f0278d-defd-4651-800e-fe62dc8f966a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b8b1c9bd-cd6c-4e59-b640=testing-taint-value-d6adfc81-af40-46c1-95c2-9113d8f43290:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-db43841c-aa3e-4f1b-9b92=testing-taint-value-3d4eb4be-7824-4ea6-b2ca-db17ef349c86:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c1d3c74c-1089-4e9a-a6f2=testing-taint-value-1b54d429-490b-45ae-bf7e-04b4c32cbb50:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ca11f0df-a34e-414e-9332=testing-taint-value-5adb5cbc-c5d1-465c-83fc-5a4e405c6e7f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-35d68876-7dc0-4129-bfbc=testing-taint-value-f7992889-1e88-4e52-824b-10d1c41ff7c6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4f7b1faa-3fa7-4428-9b0c=testing-taint-value-702bf033-ef58-47b2-ac0e-007a0e552c16:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fba4874a-fe42-4e14-b7df=testing-taint-value-840e8b35-1f76-4a5a-9e75-99e091274793:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b9691ebb-6bec-4a4b-9fe2=testing-taint-value-217fc850-1b66-4551-8d6f-c06e09c7513a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f7f5cfb7-6c7e-46dc-8a30=testing-taint-value-3c7069bb-4ddb-455b-9643-f0a26326ece7:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9975179d-1475-4b09-986b=testing-taint-value-324e861d-c7bc-45a4-b9d5-bbb6e441f335:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-cbebb0ad-fccd-4032-ae66=testing-taint-value-fecdf197-e574-4595-9b28-1acf16f8f33f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c7e407b5-3b83-470b-ad0f=testing-taint-value-3ba7cf79-db85-49cc-b398-c6aa62a3d1d7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-502a3a61-c9a7-491f-9ac2=testing-taint-value-8fae7fa9-1e17-4fd0-aafc-e4a50de961c1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7616f85b-53f9-4fcc-912b=testing-taint-value-5345f791-c567-46f9-9ce2-fca504653d5d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ec7872d6-354b-404d-ae22=testing-taint-value-47aae8ac-8b30-440e-bcc1-d5714638dd78:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-377dbbbb-33fc-4097-b351=testing-taint-value-4035fc80-a017-43e1-9e58-7e2caefb0165:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bd803eaf-2af1-46f7-93f7=testing-taint-value-7b7599f1-c1b9-49d5-a0c7-6dac32f6f46a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-60c2203b-b501-4181-8b83=testing-taint-value-031bb0d6-3cc9-46ee-ab54-23c395dc1aa0:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-68254da0-e11b-49f3-90e3=testing-taint-value-d70e6245-240c-4163-9dd4-b45ba7ff1924:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9975179d-1475-4b09-986b=testing-taint-value-324e861d-c7bc-45a4-b9d5-bbb6e441f335:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-cbebb0ad-fccd-4032-ae66=testing-taint-value-fecdf197-e574-4595-9b28-1acf16f8f33f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c7e407b5-3b83-470b-ad0f=testing-taint-value-3ba7cf79-db85-49cc-b398-c6aa62a3d1d7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-502a3a61-c9a7-491f-9ac2=testing-taint-value-8fae7fa9-1e17-4fd0-aafc-e4a50de961c1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7616f85b-53f9-4fcc-912b=testing-taint-value-5345f791-c567-46f9-9ce2-fca504653d5d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ec7872d6-354b-404d-ae22=testing-taint-value-47aae8ac-8b30-440e-bcc1-d5714638dd78:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-377dbbbb-33fc-4097-b351=testing-taint-value-4035fc80-a017-43e1-9e58-7e2caefb0165:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bd803eaf-2af1-46f7-93f7=testing-taint-value-7b7599f1-c1b9-49d5-a0c7-6dac32f6f46a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-60c2203b-b501-4181-8b83=testing-taint-value-031bb0d6-3cc9-46ee-ab54-23c395dc1aa0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-68254da0-e11b-49f3-90e3=testing-taint-value-d70e6245-240c-4163-9dd4-b45ba7ff1924:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5f0560cc-c64b-4501-b69f=testing-taint-value-68f0278d-defd-4651-800e-fe62dc8f966a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b8b1c9bd-cd6c-4e59-b640=testing-taint-value-d6adfc81-af40-46c1-95c2-9113d8f43290:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-db43841c-aa3e-4f1b-9b92=testing-taint-value-3d4eb4be-7824-4ea6-b2ca-db17ef349c86:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c1d3c74c-1089-4e9a-a6f2=testing-taint-value-1b54d429-490b-45ae-bf7e-04b4c32cbb50:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ca11f0df-a34e-414e-9332=testing-taint-value-5adb5cbc-c5d1-465c-83fc-5a4e405c6e7f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-35d68876-7dc0-4129-bfbc=testing-taint-value-f7992889-1e88-4e52-824b-10d1c41ff7c6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4f7b1faa-3fa7-4428-9b0c=testing-taint-value-702bf033-ef58-47b2-ac0e-007a0e552c16:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-fba4874a-fe42-4e14-b7df=testing-taint-value-840e8b35-1f76-4a5a-9e75-99e091274793:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b9691ebb-6bec-4a4b-9fe2=testing-taint-value-217fc850-1b66-4551-8d6f-c06e09c7513a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f7f5cfb7-6c7e-46dc-8a30=testing-taint-value-3c7069bb-4ddb-455b-9643-f0a26326ece7:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:38:31.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-5102" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:74.574 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":2,"skipped":866,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 6 01:38:31.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 6 01:38:31.088: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 6 01:38:31.097: INFO: Waiting for terminating namespaces to be deleted...
Nov 6 01:38:31.099: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:38:31.106: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:38:31.106: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:38:31.106: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:38:31.106: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:38:31.106: INFO: Container discover ready: false, restart count 0 Nov 6 01:38:31.106: INFO: Container init ready: false, restart count 0 Nov 6 01:38:31.106: INFO: Container install ready: false, restart count 0 Nov 6 01:38:31.106: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.106: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:38:31.106: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.106: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:38:31.106: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:38:31.107: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:38:31.107: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:38:31.107: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:38:31.107: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:38:31.107: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:38:31.107: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:38:31.107: INFO: Container collectd ready: true, restart count 0 Nov 6 01:38:31.107: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:38:31.107: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:38:31.107: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:38:31.107: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:38:31.107: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:38:31.107: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:38:31.107: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:38:31.107: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:38:31.107: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:38:31.107: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:38:31.107: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:38:31.107: INFO: with-tolerations from sched-priority-5102 started at 2021-11-06 01:38:22 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.107: INFO: Container with-tolerations ready: true, restart count 0 Nov 6 01:38:31.107: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:38:31.116: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:38:31.116: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:38:31.116: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:38:31.116: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:38:31.116: INFO: Container discover ready: false, restart count 0 Nov 6 01:38:31.116: INFO: Container init ready: false, restart count 0 Nov 6 01:38:31.116: INFO: Container install ready: false, restart count 0 Nov 6 01:38:31.116: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:38:31.116: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:38:31.116: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:38:31.116: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:38:31.116: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:38:31.116: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:38:31.116: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:38:31.116: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:38:31.116: INFO: Container collectd ready: true, restart count 0 Nov 6 01:38:31.116: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:38:31.116: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:38:31.116: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:38:31.116: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:38:31.116: INFO: 
prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:38:31.116: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:38:31.116: INFO: Container prometheus-operator ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b4d151288ce84e], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:38:32.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2934" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":3,"skipped":908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:38:32.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 6 01:38:32.194: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 01:39:32.250: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node1. STEP: Apply 10 fake resource to node node2. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. 
STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:40:12.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9131" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:100.426 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":4,"skipped":992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:40:12.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:40:12.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:40:12.629: INFO: Waiting for terminating namespaces to be deleted... 
Nov 6 01:40:12.631: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:40:12.641: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:40:12.641: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:40:12.641: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:40:12.641: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:40:12.641: INFO: Container discover ready: false, restart count 0 Nov 6 01:40:12.641: INFO: Container init ready: false, restart count 0 Nov 6 01:40:12.641: INFO: Container install ready: false, restart count 0 Nov 6 01:40:12.641: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:40:12.641: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:40:12.641: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:40:12.641: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:40:12.641: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:40:12.641: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:40:12.641: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:40:12.641: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.641: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:40:12.641: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:40:12.641: INFO: Container collectd ready: true, restart count 0 Nov 6 01:40:12.641: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:40:12.641: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:40:12.641: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:40:12.641: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:40:12.641: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:40:12.641: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:40:12.641: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:40:12.641: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:40:12.641: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:40:12.641: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:40:12.641: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.642: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:40:12.642: INFO: low-1 from sched-preemption-9131 started at 2021-11-06 01:39:50 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.642: INFO: Container low-1 ready: true, restart count 0 Nov 6 01:40:12.642: INFO: medium from sched-preemption-9131 started at 2021-11-06 01:40:08 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.642: INFO: Container medium ready: true, restart count 0 Nov 6 01:40:12.642: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:40:12.648: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:40:12.649: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:40:12.649: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:40:12.649: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:40:12.649: INFO: Container discover ready: false, restart count 0 Nov 6 01:40:12.649: INFO: Container init ready: false, restart count 0 Nov 6 01:40:12.649: INFO: Container install ready: false, restart count 0 Nov 6 01:40:12.649: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:40:12.649: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:40:12.649: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:40:12.649: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:40:12.649: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:40:12.649: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:40:12.649: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:40:12.649: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:40:12.649: INFO: Container collectd ready: true, restart count 0 Nov 6 01:40:12.649: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:40:12.649: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:40:12.649: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:40:12.649: 
INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:40:12.649: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:40:12.649: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:40:12.649: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:40:12.649: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 01:40:12.649: INFO: high from sched-preemption-9131 started at 2021-11-06 01:39:44 +0000 UTC (1 container statuses recorded) Nov 6 01:40:12.649: INFO: Container high ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:40:26.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4322" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.181 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":5,"skipped":1734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:40:26.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:40:26.813: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:40:26.820: INFO: Waiting for terminating namespaces to be deleted... 
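------------------------------
The filtering case above relies on the arithmetic of MaxSkew: with two nodes labelled with the dedicated kubernetes.io/e2e-pts-filter key, a 2/1 split of matching pods has skew 1 and is allowed, but placing the fourth pod on the fuller node would produce a 3/1 split with skew 2, which DoNotSchedule rejects, so the four pods necessarily end up 2/2. A hedged sketch of such a constraint in the core/v1 Go types (the label selector is a made-up placeholder):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// evenSpreadConstraint sketches the constraint applied to the 4-pod ReplicaSet
// in the filtering test: with MaxSkew=1 and DoNotSchedule, any placement that
// would push the skew between the two labelled nodes above 1 is rejected.
func evenSpreadConstraint() corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // hypothetical selector
		},
	}
}

func main() {
	fmt.Printf("%+v\n", evenSpreadConstraint())
}
------------------------------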
Nov 6 01:40:26.824: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:40:26.843: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:40:26.843: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:40:26.843: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:40:26.843: INFO: Container discover ready: false, restart count 0 Nov 6 01:40:26.843: INFO: Container init ready: false, restart count 0 Nov 6 01:40:26.843: INFO: Container install ready: false, restart count 0 Nov 6 01:40:26.843: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:40:26.843: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:40:26.843: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:40:26.843: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:40:26.843: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:40:26.843: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:40:26.843: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:40:26.843: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:40:26.843: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:40:26.843: INFO: Container collectd ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:40:26.843: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:40:26.843: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:40:26.843: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:40:26.843: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:40:26.843: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:40:26.843: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:40:26.843: INFO: rs-e2e-pts-filter-8rn42 from sched-pred-4322 started at 2021-11-06 01:40:20 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 6 01:40:26.843: INFO: rs-e2e-pts-filter-txmbm from sched-pred-4322 started at 2021-11-06 01:40:20 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.843: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 6 01:40:26.843: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:40:26.864: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:40:26.864: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:40:26.864: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:40:26.864: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:40:26.864: INFO: Container discover ready: false, restart count 0 Nov 6 01:40:26.864: INFO: Container init ready: false, restart count 0 Nov 6 01:40:26.864: INFO: Container install ready: false, restart count 0 Nov 6 01:40:26.864: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:40:26.864: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:40:26.864: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:40:26.864: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:40:26.864: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:40:26.864: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:40:26.864: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:40:26.864: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:40:26.864: INFO: Container collectd ready: true, restart count 0 Nov 6 01:40:26.864: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:40:26.864: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:40:26.864: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container 
statuses recorded) Nov 6 01:40:26.864: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:40:26.864: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:40:26.864: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:40:26.864: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:40:26.864: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 01:40:26.864: INFO: rs-e2e-pts-filter-b9c5l from sched-pred-4322 started at 2021-11-06 01:40:20 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 6 01:40:26.864: INFO: rs-e2e-pts-filter-fh9mg from sched-pred-4322 started at 2021-11-06 01:40:20 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 6 01:40:26.864: INFO: high from sched-preemption-9131 started at 2021-11-06 01:39:44 +0000 UTC (1 container statuses recorded) Nov 6 01:40:26.864: INFO: Container high ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1b468ed6-a768-4540-9c2f-4bc1d9827835 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-1b468ed6-a768-4540-9c2f-4bc1d9827835 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1b468ed6-a768-4540-9c2f-4bc1d9827835 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:40:42.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8731" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.212 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":6,"skipped":2488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:40:43.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 6 01:40:43.035: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 01:41:43.087: INFO: Waiting for terminating namespaces to be deleted... Nov 6 01:41:43.089: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 6 01:41:43.109: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting Nov 6 01:41:43.109: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting Nov 6 01:41:43.109: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 6 01:41:43.109: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
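------------------------------
The hostPort case above schedules three pods onto the same node with the same host port 54321: pod1 binds hostIP 127.0.0.1 over TCP, pod2 binds 10.10.190.207 over TCP, and pod3 binds 10.10.190.207 over UDP. None of them conflict, because a host-port collision requires the same (hostIP, hostPort, protocol) triple. A sketch of the three bindings using the core/v1 ContainerPort type (the container port 80 is an arbitrary illustrative value):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Three host-port bindings mirroring pod1/pod2/pod3 in the test above.
	// They share HostPort 54321 but differ in HostIP or Protocol, so the
	// scheduler does not treat them as conflicting.
	ports := []corev1.ContainerPort{
		{ContainerPort: 80, HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP},     // pod1
		{ContainerPort: 80, HostPort: 54321, HostIP: "10.10.190.207", Protocol: corev1.ProtocolTCP}, // pod2
		{ContainerPort: 80, HostPort: 54321, HostIP: "10.10.190.207", Protocol: corev1.ProtocolUDP}, // pod3
	}
	for i, p := range ports {
		fmt.Printf("pod%d: %s:%d/%s\n", i+1, p.HostIP, p.HostPort, p.Protocol)
	}
}
------------------------------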
Nov 6 01:41:43.123: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:41:43.123: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.123: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.123: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:41:43.124: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 6 01:41:43.124: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:41:43.124: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 
6 01:41:43.124: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:41:43.124: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Nov 6 01:41:47.175: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:41:47.175: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 6 01:41:47.175: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.175: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.176: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.176: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.176: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.176: INFO: Pod for on the 
node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.176: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:47.176: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:41:47.176: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 6 01:41:47.187: INFO: Waiting for running... Nov 6 01:41:47.191: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 6 01:41:52.259: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:41:52.259: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 6 01:41:52.259: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 6 01:41:52.259: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:41:52.259: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:42:10.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8489" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:87.303 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":7,"skipped":3232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:42:10.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:42:10.340: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:42:10.349: INFO: Waiting for terminating namespaces to be deleted... 
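------------------------------
In the PodAntiAffinity case above, the balancing step confirms that both nodes sit at the same negligible utilisation before placement (cpuFraction 100 / 77000 ≈ 0.0013, memFraction 104857600 / 178884628480 ≈ 0.00059), so the final placement of pod-with-pod-antiaffinity is decided by the anti-affinity term rather than by resource skew. A sketch of an anti-affinity term of that shape, assuming the standard core/v1 and meta/v1 Go types; the security=S1 label is inferred from the pod name pod-with-label-security-s1 in the log, everything else is a placeholder:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required anti-affinity: do not run on any node (topologyKey
	// kubernetes.io/hostname) that already hosts a pod labelled security=S1.
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				TopologyKey: "kubernetes.io/hostname",
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(antiAffinity, "", "  ")
	fmt.Println(string(out))
}
------------------------------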
Nov 6 01:42:10.351: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:42:10.367: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:42:10.367: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:42:10.367: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:42:10.367: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:42:10.367: INFO: Container discover ready: false, restart count 0 Nov 6 01:42:10.367: INFO: Container init ready: false, restart count 0 Nov 6 01:42:10.367: INFO: Container install ready: false, restart count 0 Nov 6 01:42:10.367: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.367: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:42:10.367: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.367: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:42:10.367: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.367: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:42:10.367: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:42:10.368: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:42:10.368: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:42:10.368: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:42:10.368: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:42:10.368: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:42:10.368: INFO: Container collectd ready: true, restart count 0 Nov 6 01:42:10.368: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:42:10.368: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:42:10.368: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:42:10.368: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:42:10.368: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:42:10.368: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:42:10.368: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:42:10.368: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:42:10.368: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:42:10.368: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:42:10.368: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:42:10.368: INFO: pod-with-pod-antiaffinity from sched-priority-8489 started at 2021-11-06 01:41:52 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.368: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Nov 6 01:42:10.368: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:42:10.375: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:42:10.375: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:42:10.375: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:42:10.375: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:42:10.375: INFO: Container discover ready: false, restart count 0 Nov 6 01:42:10.375: INFO: Container init ready: false, restart count 0 Nov 6 01:42:10.375: INFO: Container install ready: false, restart count 0 Nov 6 01:42:10.375: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:42:10.375: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:42:10.375: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:42:10.375: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:42:10.375: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:42:10.375: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:42:10.375: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:42:10.375: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:42:10.375: INFO: Container collectd ready: true, restart count 0 Nov 6 01:42:10.375: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:42:10.375: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:42:10.375: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:42:10.375: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:42:10.375: INFO: 
prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:42:10.375: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:42:10.375: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 01:42:10.375: INFO: pod-with-label-security-s1 from sched-priority-8489 started at 2021-11-06 01:41:43 +0000 UTC (1 container statuses recorded) Nov 6 01:42:10.375: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-98d57ab7-ff76-4ecd-97e1-5c327dfd11fc=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-2c7a4071-f809-4cd4-a6a0-8bb56e97c5ea testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d18434a78ce3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5921/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d1848d3ce7d1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d1849f6c3d43], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 305.084896ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d184a62f6328], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d184acf299a0], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d185242cfabe], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b4d18526403a90], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-98d57ab7-ff76-4ecd-97e1-5c327dfd11fc: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b4d18526403a90], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-98d57ab7-ff76-4ecd-97e1-5c327dfd11fc: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d18434a78ce3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5921/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d1848d3ce7d1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d1849f6c3d43], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 305.084896ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d184a62f6328], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d184acf299a0], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b4d185242cfabe], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-98d57ab7-ff76-4ecd-97e1-5c327dfd11fc=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b4d1858bb1a11d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5921/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-2c7a4071-f809-4cd4-a6a0-8bb56e97c5ea off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-2c7a4071-f809-4cd4-a6a0-8bb56e97c5ea STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-98d57ab7-ff76-4ecd-97e1-5c327dfd11fc=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:42:16.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5921" for this suite. 
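The FailedScheduling message above ("1 node(s) had taint {...: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector") is the scheduler's taint/toleration filter rejecting node2: the relaunched still-no-tolerations pod carries no toleration for the randomly generated NoSchedule taint. A minimal Go sketch of that matching rule, using the core/v1 types; the taint key below is shortened for readability and the pod spec is an assumption, not the test's literal spec:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// tolerates reports whether any of the pod's tolerations tolerates the taint.
// A simplified restatement of the rule the scheduler's TaintToleration filter
// applies; it is not the e2e test's own code.
func tolerates(pod *v1.Pod, taint *v1.Taint) bool {
	for _, t := range pod.Spec.Tolerations {
		if t.ToleratesTaint(taint) {
			return true
		}
	}
	return false
}

func main() {
	// Shape of the taint the test applies (the real key is randomly generated,
	// as seen in the log; this key is illustrative).
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// The relaunched pod has no tolerations at all, so any NoSchedule taint
	// on the candidate node disqualifies it.
	stillNoTolerations := v1.Pod{}

	fmt.Println(tolerates(&stillNoTolerations, &taint)) // prints: false
}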
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.181 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":8,"skipped":3508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:42:16.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:42:16.522: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:42:16.529: INFO: Waiting for terminating namespaces to be deleted... 
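Both the taint spec that just passed and the overhead spec now setting up assert on the scheduling outcome via Events rather than pod status: each "Considering event:" line above is one observed Event for the pod under test. A minimal client-go sketch of the same kind of check; the kubeconfig path, namespace, and pod name are taken from the log, everything else is an assumption and this is not the framework's own helper:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List the Events involving the pod from the previous spec.
	events, err := cs.CoreV1().Events("sched-pred-5921").List(context.TODO(),
		metav1.ListOptions{FieldSelector: "involvedObject.name=still-no-tolerations"})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Mirrors the "Type = [...], Reason = [...], Message = [...]" lines above.
		fmt.Printf("Type = [%s], Reason = [%s], Message = [%s]\n", e.Type, e.Reason, e.Message)
	}
}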
Nov 6 01:42:16.532: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:42:16.540: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:42:16.540: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:42:16.540: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:42:16.540: INFO: Container discover ready: false, restart count 0 Nov 6 01:42:16.540: INFO: Container init ready: false, restart count 0 Nov 6 01:42:16.540: INFO: Container install ready: false, restart count 0 Nov 6 01:42:16.540: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:42:16.540: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:42:16.540: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:42:16.540: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:42:16.540: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:42:16.540: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:42:16.540: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:42:16.540: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:42:16.540: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:42:16.540: INFO: Container collectd ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:42:16.540: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:42:16.540: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:42:16.540: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:42:16.540: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:42:16.540: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:42:16.540: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:42:16.540: INFO: pod-with-pod-antiaffinity from sched-priority-8489 started at 2021-11-06 01:41:52 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.540: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Nov 6 01:42:16.540: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:42:16.560: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:42:16.560: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:42:16.560: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:42:16.560: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:42:16.560: INFO: Container discover ready: false, restart count 0 Nov 6 01:42:16.560: INFO: Container init ready: false, restart count 0 Nov 6 01:42:16.560: INFO: Container install ready: false, restart count 0 Nov 6 01:42:16.560: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:42:16.560: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:42:16.560: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:42:16.560: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:42:16.560: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:42:16.560: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:42:16.560: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:42:16.560: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:42:16.560: INFO: Container collectd ready: true, restart count 0 Nov 6 01:42:16.560: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:42:16.560: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:42:16.560: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:42:16.560: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:42:16.560: INFO: 
prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:42:16.560: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:42:16.560: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 01:42:16.561: INFO: still-no-tolerations from sched-pred-5921 started at 2021-11-06 01:42:16 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.561: INFO: Container still-no-tolerations ready: false, restart count 0 Nov 6 01:42:16.561: INFO: pod-with-label-security-s1 from sched-priority-8489 started at 2021-11-06 01:41:43 +0000 UTC (1 container statuses recorded) Nov 6 01:42:16.561: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d.16b4d1869747230d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d.16b4d186f1940b58], Reason = [Scheduled], Message = [Successfully assigned sched-pred-232/filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d.16b4d187460be814], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d.16b4d187599a5dc5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 328.089038ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d.16b4d1875f78ef8a], Reason = [Created], Message = [Created container filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d] STEP: Considering event: Type = [Normal], Name = [filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d.16b4d18765c21f7b], Reason = [Started], Message = [Started container filler-pod-66253584-aab0-4053-ba7e-811c43d6e91d] STEP: Considering event: Type = [Normal], Name = [without-label.16b4d185a6cac4b5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-232/without-label to node1] STEP: Considering event: Type = [Normal], Name = [without-label.16b4d185fd568ec7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16b4d1860f7568e0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 303.996502ms] STEP: Considering event: Type = [Normal], Name = [without-label.16b4d186157d1eb2], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b4d1861d630903], Reason = [Started], Message = [Started container 
without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b4d18696811a78], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod11edb048-2b7b-450d-abd6-8e4ae4cf63af.16b4d187fea78160], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:42:27.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-232" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.192 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":9,"skipped":3649,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:42:27.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 6 01:42:27.720: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 01:43:27.783: INFO: Waiting for terminating namespaces to be deleted... 
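A note on the pod-overhead spec above: it passes because the scheduler adds the RuntimeClass overhead to the pod's own requests before checking node capacity, and the overhead is expressed in the fake extended resource example.com/beardsecond that the spec temporarily advertises on the nodes. With the filler pod already holding most of that resource, the additional pod's request-plus-overhead fits nowhere, hence "0/5 nodes are available: 5 Insufficient example.com/beardsecond". A rough Go sketch of such a RuntimeClass and of the request-plus-overhead arithmetic; the handler name and quantities are illustrative, not the test's actual values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A RuntimeClass whose pod overhead is expressed in the fake extended resource.
	rc := nodev1.RuntimeClass{
		Handler: "runc", // illustrative handler
		Overhead: &nodev1.Overhead{
			PodFixed: v1.ResourceList{
				"example.com/beardsecond": resource.MustParse("250"), // illustrative quantity
			},
		},
	}

	// What the scheduler accounts for is the pod's own request plus the
	// RuntimeClass overhead; if the node's remaining example.com/beardsecond
	// is smaller than this sum, scheduling fails with "Insufficient ...".
	containerRequest := resource.MustParse("100") // hypothetical container request
	effective := containerRequest.DeepCopy()
	effective.Add(rc.Overhead.PodFixed["example.com/beardsecond"])

	fmt.Println("effective example.com/beardsecond demand:", effective.String()) // 350
}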
Nov 6 01:43:27.785: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 6 01:43:27.802: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting Nov 6 01:43:27.802: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting Nov 6 01:43:27.802: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 6 01:43:27.802: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 6 01:43:27.819: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:43:27.819: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 6 01:43:27.819: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:27.819: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:43:27.819: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Nov 6 01:43:35.918: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:43:35.918: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 6 01:43:35.918: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:43:35.918: INFO: Pod for on the 
node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200 Nov 6 01:43:35.918: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 6 01:43:35.918: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 6 01:43:35.929: INFO: Waiting for running... Nov 6 01:43:35.932: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
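The ComputeCPUMemFraction lines come down to simple ratios: total requested CPU (in millicores) divided by allocatable CPU, and total requested memory divided by allocatable memory, each capped at 1. That is why both nodes report cpuFraction ≈ 0.0013 and memFraction ≈ 0.00059 before padding, and exactly 1 once the balanced padding pods below are created. A small self-contained sketch of that arithmetic, using node1's numbers from the log; it mirrors the printed values but is not the framework's own helper:

package main

import "fmt"

// computeFraction mirrors the requested/allocatable ratio behind the
// ComputeCPUMemFraction output, capped at 1.
func computeFraction(requestedCPUMilli, allocatableCPUMilli, requestedMemBytes, allocatableMemBytes int64) (float64, float64) {
	cpuFraction := float64(requestedCPUMilli) / float64(allocatableCPUMilli)
	if cpuFraction > 1 {
		cpuFraction = 1 // after the padding pods are added, both nodes print exactly 1
	}
	memFraction := float64(requestedMemBytes) / float64(allocatableMemBytes)
	if memFraction > 1 {
		memFraction = 1
	}
	return cpuFraction, memFraction
}

func main() {
	// node1 before padding, numbers copied from the log above.
	cpu, mem := computeFraction(100, 77000, 104857600, 178884628480)
	fmt.Println(cpu, mem) // ≈0.0012987012987012987 and ≈0.0005861744571961558
}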
Nov 6 01:43:41.002: INFO: ComputeCPUMemFraction for node: node2 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 6 01:43:41.002: INFO: Node: node2, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 6 01:43:41.002: INFO: ComputeCPUMemFraction for node: node1 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Pod for on the node: c363c016-06ae-435f-b4c9-87cbb8cf1dd5-0, Cpu: 38400, Mem: 89350039552 Nov 6 01:43:41.002: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 6 01:43:41.002: INFO: Node: node1, totalRequestedMemResource: 1340355450880, memAllocatableVal: 178884628480, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:43:59.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9927" for this suite. 
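With both nodes padded to identical utilisation, only the PodTopologySpread score separates them: the four rs-e2e-pts-score replicas pinned to node2 raise the skew on the kubernetes.io/e2e-pts-score topology key, so the test-pod is preferably placed on node1, as the verification step above confirms. A sketch of the kind of spread constraint involved; WhenUnsatisfiable: ScheduleAnyway makes this a scoring (soft) constraint rather than a filter, and the label selector here is an assumption:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A soft (scoring-only) spread constraint over the per-node topology key
	// the test applied; the real test selects the rs-e2e-pts-score pods.
	constraint := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score",
		WhenUnsatisfiable: v1.ScheduleAnyway, // violating the skew only costs score, it does not filter nodes
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-score"}, // hypothetical selector
		},
	}
	fmt.Printf("%+v\n", constraint)
}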
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:91.390 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":10,"skipped":3859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:43:59.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:43:59.129: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:43:59.137: INFO: Waiting for terminating namespaces to be deleted... 
Nov 6 01:43:59.142: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:43:59.150: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:43:59.150: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:43:59.150: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:43:59.150: INFO: Container discover ready: false, restart count 0 Nov 6 01:43:59.150: INFO: Container init ready: false, restart count 0 Nov 6 01:43:59.150: INFO: Container install ready: false, restart count 0 Nov 6 01:43:59.150: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:43:59.150: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:43:59.150: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:43:59.150: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:43:59.150: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:43:59.150: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:43:59.150: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:43:59.150: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:43:59.150: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:43:59.150: INFO: Container collectd ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:43:59.150: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:43:59.150: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:43:59.150: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:43:59.150: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:43:59.150: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:43:59.150: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:43:59.150: INFO: test-pod from sched-priority-9927 started at 2021-11-06 01:43:49 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.150: INFO: Container test-pod ready: true, restart count 0 Nov 6 01:43:59.150: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:43:59.170: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:43:59.170: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:43:59.170: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:43:59.170: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:43:59.170: INFO: Container discover ready: false, restart count 0 Nov 6 01:43:59.170: INFO: Container init ready: false, restart count 0 Nov 6 01:43:59.170: INFO: Container install ready: false, restart count 0 Nov 6 01:43:59.170: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.170: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:43:59.170: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.170: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:43:59.170: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:43:59.171: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:43:59.171: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:43:59.171: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:43:59.171: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:43:59.171: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:43:59.171: INFO: Container collectd ready: true, restart count 0 Nov 6 01:43:59.171: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:43:59.171: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:43:59.171: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:43:59.171: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:43:59.171: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:43:59.171: INFO: prometheus-operator-585ccfb458-vh55q 
from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:43:59.171: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:43:59.171: INFO: Container prometheus-operator ready: true, restart count 0 Nov 6 01:43:59.171: INFO: rs-e2e-pts-score-2xs7f from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container e2e-pts-score ready: true, restart count 0 Nov 6 01:43:59.171: INFO: rs-e2e-pts-score-87rpv from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container e2e-pts-score ready: true, restart count 0 Nov 6 01:43:59.171: INFO: rs-e2e-pts-score-kq9qw from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container e2e-pts-score ready: true, restart count 0 Nov 6 01:43:59.171: INFO: rs-e2e-pts-score-ntv8r from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:43:59.171: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-12549b0a-9f48-4c1c-bcd1-f9365daeba8e 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-12549b0a-9f48-4c1c-bcd1-f9365daeba8e off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-12549b0a-9f48-4c1c-bcd1-f9365daeba8e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:44:11.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6242" for this suite. 
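Here the relaunched pod schedules only because its required node affinity matches the random label (value 42) that was applied to node2 and then removed in cleanup. A sketch of a required node-affinity term of that shape, with the key and value copied from the log and the surrounding pod spec assumed:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Required node affinity keyed to the random label the test put on node2.
	affinity := v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-12549b0a-9f48-4c1c-bcd1-f9365daeba8e",
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", affinity.NodeAffinity)
}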
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.159 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":11,"skipped":5160,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:44:11.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 6 01:44:11.297: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 6 01:44:11.306: INFO: Waiting for terminating namespaces to be deleted... 
Nov 6 01:44:11.309: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 6 01:44:11.319: INFO: cmk-cfm9r from kube-system started at 2021-11-05 21:13:47 +0000 UTC (2 container statuses recorded) Nov 6 01:44:11.319: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:44:11.319: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:44:11.319: INFO: cmk-init-discover-node1-nnkks from kube-system started at 2021-11-05 21:13:04 +0000 UTC (3 container statuses recorded) Nov 6 01:44:11.319: INFO: Container discover ready: false, restart count 0 Nov 6 01:44:11.319: INFO: Container init ready: false, restart count 0 Nov 6 01:44:11.319: INFO: Container install ready: false, restart count 0 Nov 6 01:44:11.319: INFO: cmk-webhook-6c9d5f8578-wq5mk from kube-system started at 2021-11-05 21:13:47 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container cmk-webhook ready: true, restart count 0 Nov 6 01:44:11.320: INFO: kube-flannel-hxwks from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container kube-flannel ready: true, restart count 3 Nov 6 01:44:11.320: INFO: kube-multus-ds-amd64-mqrl8 from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:44:11.320: INFO: kube-proxy-mc4cs from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:44:11.320: INFO: kubernetes-dashboard-785dcbb76d-9wtdz from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 6 01:44:11.320: INFO: nginx-proxy-node1 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:44:11.320: INFO: node-feature-discovery-worker-spmbf from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:44:11.320: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-l4npn from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:44:11.320: INFO: collectd-5k6s9 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:44:11.320: INFO: Container collectd ready: true, restart count 0 Nov 6 01:44:11.320: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:44:11.320: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:44:11.320: INFO: node-exporter-fvksz from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:44:11.320: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:44:11.320: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:44:11.320: INFO: prometheus-k8s-0 from monitoring started at 2021-11-05 21:14:58 +0000 UTC (4 container statuses recorded) Nov 6 01:44:11.320: INFO: Container config-reloader ready: true, restart count 0 Nov 6 01:44:11.320: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 6 01:44:11.320: INFO: Container grafana ready: true, restart count 0 
Nov 6 01:44:11.320: INFO: Container prometheus ready: true, restart count 1 Nov 6 01:44:11.320: INFO: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s from monitoring started at 2021-11-05 21:17:51 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.320: INFO: Container tas-extender ready: true, restart count 0 Nov 6 01:44:11.320: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 6 01:44:11.330: INFO: cmk-bnvd2 from kube-system started at 2021-11-05 21:13:46 +0000 UTC (2 container statuses recorded) Nov 6 01:44:11.330: INFO: Container nodereport ready: true, restart count 0 Nov 6 01:44:11.330: INFO: Container reconcile ready: true, restart count 0 Nov 6 01:44:11.330: INFO: cmk-init-discover-node2-9svdd from kube-system started at 2021-11-05 21:13:24 +0000 UTC (3 container statuses recorded) Nov 6 01:44:11.330: INFO: Container discover ready: false, restart count 0 Nov 6 01:44:11.330: INFO: Container init ready: false, restart count 0 Nov 6 01:44:11.330: INFO: Container install ready: false, restart count 0 Nov 6 01:44:11.330: INFO: kube-flannel-cqj7j from kube-system started at 2021-11-05 21:01:36 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kube-flannel ready: true, restart count 2 Nov 6 01:44:11.330: INFO: kube-multus-ds-amd64-p7bxx from kube-system started at 2021-11-05 21:01:44 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kube-multus ready: true, restart count 1 Nov 6 01:44:11.330: INFO: kube-proxy-j9lmg from kube-system started at 2021-11-05 21:00:42 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kube-proxy ready: true, restart count 2 Nov 6 01:44:11.330: INFO: kubernetes-metrics-scraper-5558854cb-v9vgg from kube-system started at 2021-11-05 21:02:14 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 6 01:44:11.330: INFO: nginx-proxy-node2 from kube-system started at 2021-11-05 21:00:39 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container nginx-proxy ready: true, restart count 2 Nov 6 01:44:11.330: INFO: node-feature-discovery-worker-pn6cr from kube-system started at 2021-11-05 21:09:34 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container nfd-worker ready: true, restart count 0 Nov 6 01:44:11.330: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tzh4p from kube-system started at 2021-11-05 21:10:45 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 6 01:44:11.330: INFO: collectd-r2g57 from monitoring started at 2021-11-05 21:18:40 +0000 UTC (3 container statuses recorded) Nov 6 01:44:11.330: INFO: Container collectd ready: true, restart count 0 Nov 6 01:44:11.330: INFO: Container collectd-exporter ready: true, restart count 0 Nov 6 01:44:11.330: INFO: Container rbac-proxy ready: true, restart count 0 Nov 6 01:44:11.330: INFO: node-exporter-k7p79 from monitoring started at 2021-11-05 21:14:48 +0000 UTC (2 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:44:11.330: INFO: Container node-exporter ready: true, restart count 0 Nov 6 01:44:11.330: INFO: prometheus-operator-585ccfb458-vh55q from monitoring started at 2021-11-05 21:14:41 +0000 UTC (2 container statuses recorded) Nov 6 01:44:11.330: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 6 01:44:11.330: INFO: Container 
prometheus-operator ready: true, restart count 0 Nov 6 01:44:11.330: INFO: with-labels from sched-pred-6242 started at 2021-11-06 01:44:03 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.330: INFO: Container with-labels ready: true, restart count 0 Nov 6 01:44:11.331: INFO: rs-e2e-pts-score-2xs7f from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.331: INFO: Container e2e-pts-score ready: false, restart count 0 Nov 6 01:44:11.331: INFO: rs-e2e-pts-score-87rpv from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.331: INFO: Container e2e-pts-score ready: false, restart count 0 Nov 6 01:44:11.331: INFO: rs-e2e-pts-score-kq9qw from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.331: INFO: Container e2e-pts-score ready: false, restart count 0 Nov 6 01:44:11.331: INFO: rs-e2e-pts-score-ntv8r from sched-priority-9927 started at 2021-11-06 01:43:41 +0000 UTC (1 container statuses recorded) Nov 6 01:44:11.331: INFO: Container e2e-pts-score ready: false, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-dc63506d-f109-4fd7-b7bb-d36d928cf980=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-3c6d7737-c145-407b-9e5d-c2f2ce921fb5 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-3c6d7737-c145-407b-9e5d-c2f2ce921fb5 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-3c6d7737-c145-407b-9e5d-c2f2ce921fb5 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-dc63506d-f109-4fd7-b7bb-d36d928cf980=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 6 01:44:21.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6888" for this suite. 
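This is the matching counterpart of the earlier taint spec: the relaunched pod carries a toleration whose key, value, and effect match the random NoSchedule taint, so the taint no longer filters node1 out. A sketch with the taint key and value copied from the log; the toleration is the shape such a pod would need, not the test's literal spec:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-dc63506d-f109-4fd7-b7bb-d36d928cf980",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}
	// A toleration matching the taint on key, value, and effect.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}
	fmt.Println(toleration.ToleratesTaint(&taint)) // prints: true, so node1's NoSchedule taint no longer excludes the pod
}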
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.174 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":12,"skipped":5415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 6 01:44:21.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 6 01:44:21.464: INFO: Waiting up to 1m0s for all nodes to be ready Nov 6 01:45:21.515: INFO: Waiting for terminating namespaces to be deleted... Nov 6 01:45:21.517: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 6 01:45:21.535: INFO: The status of Pod cmk-init-discover-node1-nnkks is Succeeded, skipping waiting Nov 6 01:45:21.535: INFO: The status of Pod cmk-init-discover-node2-9svdd is Succeeded, skipping waiting Nov 6 01:45:21.535: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 6 01:45:21.535: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Nov 6 01:45:21.552: INFO: ComputeCPUMemFraction for node: node1
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.552: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 6 01:45:21.552: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
Nov 6 01:45:21.553: INFO: ComputeCPUMemFraction for node: node2
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.553: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 6 01:45:21.553: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
[It] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
Nov 6 01:45:21.570: INFO: ComputeCPUMemFraction for node: node1
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 6 01:45:21.570: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
Nov 6 01:45:21.570: INFO: ComputeCPUMemFraction for node: node2
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-qbp7s, Cpu: 100, Mem: 209715200
Nov 6 01:45:21.570: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 6 01:45:21.570: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
Nov 6 01:45:21.586: INFO: Waiting for running...
Nov 6 01:45:21.588: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Nov 6 01:45:26.669: INFO: ComputeCPUMemFraction for node: node1
Nov 6 01:45:26.669: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Node: node1, totalRequestedCPUResource: 576100, cpuAllocatableMil: 77000, cpuFraction: 1
Nov 6 01:45:26.670: INFO: Node: node1, totalRequestedMemResource: 1340355481600, memAllocatableVal: 178884628480, memFraction: 1
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Nov 6 01:45:26.670: INFO: ComputeCPUMemFraction for node: node2
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Pod for on the node: 2a391c30-e78b-4a2b-8e7e-ea3a7abce7ed-0, Cpu: 38400, Mem: 89350041600
Nov 6 01:45:26.670: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1
Nov 6 01:45:26.670: INFO: Node: node2, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884632576, memFraction: 1
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7644 to 1
STEP: Verify the pods should not scheduled to the node: node1
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7644, will wait for the garbage collector to delete the pods
Nov 6 01:45:32.852: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.39723ms
Nov 6 01:45:32.952: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.645089ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 6 01:45:49.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-7644" for this suite.
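The avoidPod spec above drives the scheduler's NodePreferAvoidPods priority: it creates a ReplicationController with 0 replicas, puts a preferAvoidPods annotation naming that controller on node1, scales the RC to one replica per remaining node, and asserts that none of its pods land on node1. A rough Go sketch of building that annotation value follows; the struct layout is a simplified, hand-written mirror of the scheduler.alpha.kubernetes.io/preferAvoidPods payload and may not match the upstream v1 types field-for-field.

package main

import (
	"encoding/json"
	"fmt"
)

// Simplified, hand-written mirror of the preferAvoidPods annotation payload.
// Field names follow the JSON the e2e test is believed to write; treat the
// exact schema as an assumption rather than the canonical v1 definition.
type podController struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Name       string `json:"name"`
	UID        string `json:"uid"`
	Controller bool   `json:"controller"`
}

type preferAvoidPodsEntry struct {
	PodSignature struct {
		PodController podController `json:"podController"`
	} `json:"podSignature"`
	Reason string `json:"reason"`
}

type avoidPods struct {
	PreferAvoidPods []preferAvoidPodsEntry `json:"preferAvoidPods"`
}

func main() {
	// Annotation key consumed by the NodePreferAvoidPods priority.
	const key = "scheduler.alpha.kubernetes.io/preferAvoidPods"

	var entry preferAvoidPodsEntry
	entry.PodSignature.PodController = podController{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       "scheduler-priority-avoid-pod", // RC name from the log above
		UID:        "<uid of the RC>",              // placeholder; the test uses the real UID
		Controller: true,
	}
	entry.Reason = "avoid pods from this controller on the annotated node"

	val, _ := json.Marshal(avoidPods{PreferAvoidPods: []preferAvoidPodsEntry{entry}})
	// The test sets key=val in node1's metadata.annotations; pods owned by
	// that RC are then scored away from node1 by the scheduler.
	fmt.Printf("%s=%s\n", key, val)
}

The "Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1" step leaves exactly one replica for the remaining node, so the verification only has to confirm that node1 received no pods before the RC is garbage-collected.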
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:87.941 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":13,"skipped":5450,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Nov 6 01:45:49.386: INFO: Running AfterSuite actions on all nodes
Nov 6 01:45:49.386: INFO: Running AfterSuite actions on node 1
Nov 6 01:45:49.386: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0}
Ran 13 of 5770 Specs in 529.444 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped
PASS

Ginkgo ran 1 suite in 8m50.804826913s
Test Suite Passed