I0508 01:48:35.239614 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0508 01:48:35.239772 22 e2e.go:129] Starting e2e run "c84f585c-a053-4282-ab48-fbee724551af" on Ginkgo node 1 {"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1620438514 - Will randomize all specs Will run 12 of 5484 specs May 8 01:48:35.253: INFO: >>> kubeConfig: /root/.kube/config May 8 01:48:35.257: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 8 01:48:35.286: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 8 01:48:35.351: INFO: The status of Pod cmk-init-discover-node2-kd9gg is Succeeded, skipping waiting May 8 01:48:35.351: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 8 01:48:35.351: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 8 01:48:35.351: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 8 01:48:35.368: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 8 01:48:35.368: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 8 01:48:35.368: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 8 01:48:35.368: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 8 01:48:35.368: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 8 01:48:35.368: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 8 01:48:35.368: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 8 01:48:35.368: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 8 01:48:35.368: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 8 01:48:35.368: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 8 01:48:35.368: INFO: e2e test version: v1.19.10 May 8 01:48:35.368: INFO: kube-apiserver version: v1.19.8 May 8 01:48:35.368: INFO: >>> kubeConfig: /root/.kube/config May 8 01:48:35.372: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:48:35.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
sched-preemption May 8 01:48:35.396: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 8 01:48:35.400: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 May 8 01:48:35.410: INFO: Waiting up to 1m0s for all nodes to be ready May 8 01:49:35.464: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node1. STEP: Apply 10 fake resource to node node2. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:50:15.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3925" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:100.380 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":1,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:50:15.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:50:15.785: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:50:15.792: INFO: Waiting for terminating namespaces to be deleted... 
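For reference, the PodTopologySpread Preemption case that passed above gives the "medium" pod a topology spread constraint over the per-test kubernetes.io/e2e-pts-preemption key plus a higher priority than the "low" pods, so satisfying the spread forces the scheduler to preempt one of them (only high, low-1 and medium remain at the end). Below is a minimal sketch of such a pod spec, assuming illustrative names for the fake extended resource, the PriorityClass and the labels; these are not taken from the e2e source.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mediumPod sketches a pod like the "medium" pod above: it requests one
// unit of a fake extended resource, carries a medium PriorityClass, and
// must spread over the dedicated e2e-pts-preemption topology key.
// "example.com/fake-pts-res", "medium-priority" and the label values are
// placeholders, not the names the e2e suite uses internally.
func mediumPod() *v1.Pod {
	fakeRes := v1.ResourceName("example.com/fake-pts-res")
	labels := map[string]string{"e2e-pts-preemption": "medium"}
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "medium", Labels: labels},
		Spec: v1.PodSpec{
			PriorityClassName: "medium-priority",
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: v1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
			}},
			Containers: []v1.Container{{
				Name:  "medium",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{fakeRes: resource.MustParse("1")},
					Limits:   v1.ResourceList{fakeRes: resource.MustParse("1")},
				},
			}},
		},
	}
}

func main() {
	fmt.Println(mediumPod().Name)
}
```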
May 8 01:50:15.794: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:50:15.805: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:50:15.805: INFO: Container nodereport ready: true, restart count 0 May 8 01:50:15.805: INFO: Container reconcile ready: true, restart count 0 May 8 01:50:15.805: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:50:15.805: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container kube-multus ready: true, restart count 1 May 8 01:50:15.805: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:50:15.805: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:50:15.805: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:50:15.805: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:50:15.805: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:50:15.805: INFO: Container collectd ready: true, restart count 0 May 8 01:50:15.805: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:50:15.805: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:50:15.805: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:50:15.805: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:50:15.805: INFO: Container node-exporter ready: true, restart count 0 May 8 01:50:15.805: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:50:15.805: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:50:15.805: INFO: Container grafana ready: true, restart count 0 May 8 01:50:15.805: INFO: Container prometheus ready: true, restart count 22 May 8 01:50:15.805: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:50:15.805: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:50:15.805: INFO: high from sched-preemption-3925 started at 2021-05-08 01:49:45 +0000 UTC (1 container statuses recorded) May 8 01:50:15.805: INFO: Container high ready: true, restart count 0 May 8 01:50:15.805: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:50:15.812: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:50:15.812: INFO: Container nodereport ready: true, restart count 0 May 8 01:50:15.812: INFO: Container reconcile ready: true, restart count 0 May 8 01:50:15.812: INFO: cmk-init-discover-node2-kd9gg from kube-system started at 
2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:50:15.812: INFO: Container discover ready: false, restart count 0 May 8 01:50:15.812: INFO: Container init ready: false, restart count 0 May 8 01:50:15.812: INFO: Container install ready: false, restart count 0 May 8 01:50:15.812: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:50:15.812: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:50:15.812: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container kube-multus ready: true, restart count 1 May 8 01:50:15.812: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:50:15.812: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:50:15.812: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:50:15.812: INFO: node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:50:15.812: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:50:15.812: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:50:15.812: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:50:15.812: INFO: Container collectd ready: true, restart count 0 May 8 01:50:15.812: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:50:15.812: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:50:15.812: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:50:15.812: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:50:15.812: INFO: Container node-exporter ready: true, restart count 0 May 8 01:50:15.812: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:50:15.813: INFO: Container tas-controller ready: true, restart count 0 May 8 01:50:15.813: INFO: Container tas-extender ready: true, restart count 0 May 8 01:50:15.813: INFO: low-1 from sched-preemption-3925 started at 2021-05-08 01:49:49 +0000 UTC (1 container statuses recorded) May 8 01:50:15.813: INFO: Container low-1 ready: true, restart count 0 May 8 01:50:15.813: INFO: medium from sched-preemption-3925 started at 2021-05-08 01:50:03 +0000 UTC (1 container statuses recorded) May 8 01:50:15.813: INFO: Container medium ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run 
[Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 8 01:50:15.848: INFO: Pod cmk-gvh7j requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod cmk-qzhwr requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod cmk-webhook-6c9d5f8578-94s58 requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod kube-flannel-htqkx requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod kube-flannel-qm7lv requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod kube-multus-ds-amd64-fxgdb requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod kube-multus-ds-amd64-g98hm requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod kube-proxy-bms7z requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod kube-proxy-rgw7h requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod kubernetes-dashboard-86c6f9df5b-k9cj2 requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod node-feature-discovery-worker-t66pk requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod node-feature-discovery-worker-wp5n6 requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod collectd-h2lg2 requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod collectd-p5gbt requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod node-exporter-4bcls requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod node-exporter-qv7mz requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-8z46f requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod high requesting local ephemeral resource =0 on Node node1 May 8 01:50:15.848: INFO: Pod low-1 requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Pod medium requesting local ephemeral resource =0 on Node node2 May 8 01:50:15.848: INFO: Using pod capacity: 40542413347 May 8 01:50:15.848: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 May 8 01:50:15.848: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 8 01:50:16.041: INFO: Waiting for running... 
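The figures above imply the saturation arithmetic: each worker reports 405424133473 bytes of allocatable ephemeral storage, the test requests one tenth of that (40542413347 bytes) per pod, and 20 such pods across the two schedulable nodes leave no headroom for the extra pod created afterwards. A minimal sketch of a pod carrying that ephemeral-storage request and limit is shown below; the per-pod byte count comes straight from the log, while the helper name and everything else is illustrative rather than the e2e code itself.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overcommitPod builds a pause pod that requests one tenth of a node's
// allocatable ephemeral storage, mirroring the saturation step logged above.
func overcommitPod(i int, perPodBytes int64) *v1.Pod {
	req := *resource.NewQuantity(perPodBytes, resource.BinarySI)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("overcommit-%d", i)},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  fmt.Sprintf("overcommit-%d", i),
				Image: "k8s.gcr.io/pause:3.2",
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{v1.ResourceEphemeralStorage: req},
					Limits:   v1.ResourceList{v1.ResourceEphemeralStorage: req},
				},
			}},
		},
	}
}

func main() {
	const allocatable = int64(405424133473) // per node, as logged
	perPod := allocatable / 10              // 40542413347, the "pod capacity" in the log
	fmt.Println(overcommitPod(0, perPod).Name, perPod)
}
```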
STEP: Considering event: Type = [Normal], Name = [overcommit-0.167cf455eea648da], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-0 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167cf4581385c70a], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.88/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167cf458144b3550], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167cf458a96a237a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.501824275s] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167cf458afe80305], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167cf458b57ed4d4], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167cf455ef2a2882], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167cf4581338efdf], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.87/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167cf45814182f4c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167cf4585041b4ee], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.009339395s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167cf45856b0d162], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167cf4585cd20e22], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167cf455f4521393], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167cf458237b1347], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.119/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167cf458244c6ffe], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167cf458b463b3e2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.417434339s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167cf458bb1b7009], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167cf458c17e35bf], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167cf455f4df3c96], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-11 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167cf4576218f677], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.82/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167cf45780d59a6e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167cf4579eed2b72], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 504.852045ms] STEP: Considering event: Type = [Normal], Name = 
[overcommit-11.167cf457c930d8a5], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167cf45816dd27a6], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167cf455f561e293], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-12 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167cf45822f6e19f], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.123/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167cf458240f58a8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167cf4589a453516], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.983232538s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167cf458a0bec0af], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167cf458a6e4d616], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167cf455f5fc0a47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167cf4581338638c], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.90/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167cf4581425d3f7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167cf4586c58ee09], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.479735938s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167cf458734f3004], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167cf4587c2349f2], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167cf455f69c0e5b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167cf457c07ab2b2], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.116/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167cf4580455e9b9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167cf45824b70573], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 543.221638ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167cf45831631dac], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167cf458470c3c65], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167cf455f721b5be], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167cf45805b0b68f], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.83/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167cf4581335b302], Reason = [Pulling], Message = [Pulling image 
"k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167cf4583263b2df], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 523.097529ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167cf45838a2a926], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167cf4583e719ae0], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167cf455f7ad56a6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167cf458137a1280], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.86/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167cf4581444c17d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167cf4588a2350e2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.977512956s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167cf45891226032], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167cf4589761c944], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167cf455f83dc782], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167cf458137fae43], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.85/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167cf458146a5dd0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167cf458c79669c6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.00600047s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167cf458cf0d52c1], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167cf458d53d8226], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167cf455f8cb21a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-18 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167cf4581e807621], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.118/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167cf45822c74994], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167cf4585e4018f0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 997.764919ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167cf45864e9a164], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167cf4586ad88ee0], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167cf455f9660c53], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-19 to node2] STEP: Considering event: 
Type = [Normal], Name = [overcommit-19.167cf45823394165], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.122/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167cf45823f3b22f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167cf4587d100641], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.495007575s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167cf45883dd9463], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167cf45889d80e09], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167cf455efb6e4a3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167cf45813b59a4d], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.84/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167cf458150a2128], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167cf458e4f2b0a8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.488112243s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167cf458ec93c609], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167cf458f25edffe], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167cf455f04e2fe1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-3 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167cf458152b1ba6], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.91/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167cf45815d93e86], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167cf4591fb25f73], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 4.460181347s] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167cf459260a1b27], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167cf4592c04eddc], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167cf455f0dd8958], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167cf458049b196f], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.117/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167cf45808923287], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167cf45841f712de], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 962.896781ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167cf4584c0e4b4c], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167cf458554798d7], Reason = [Started], Message = [Started 
container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167cf455f170cd50], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-5 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167cf45814a92aeb], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.89/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167cf458154e9bce], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167cf45903344384], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.991221863s] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167cf4590a4d786b], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167cf459104414d0], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167cf455f2187cf6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167cf458236dbf87], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.120/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167cf458245398b6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167cf458eec81012], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.396623159s] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167cf458f5067f9f], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167cf458fb221d64], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167cf455f29ed588], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167cf4582388f108], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.124/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167cf4582451ee09], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167cf458d27d0a92], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.922051557s] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167cf458d89cae88], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167cf458de499c84], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167cf455f333d13b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167cf45823a5f700], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.121/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167cf45824b97fb6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167cf4590d4b253b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.901851898s] STEP: Considering event: Type = [Normal], Name = 
[overcommit-8.167cf45913d03c6e], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167cf45919f4166e], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167cf455f3c451df], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8674/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167cf457873765eb], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.115/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167cf4579bbeb61e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167cf457bbdeff8a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 538.979133ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167cf457e4330057], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167cf4582fe3a0f1], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.167cf45aa5b90e4a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [additional-pod.167cf45aa6081f3c], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:50:37.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8674" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.371 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":2,"skipped":875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:50:37.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:50:37.162: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:50:37.170: INFO: Waiting for terminating namespaces to be deleted... 
May 8 01:50:37.172: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:50:37.191: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:50:37.191: INFO: Container nodereport ready: true, restart count 0 May 8 01:50:37.191: INFO: Container reconcile ready: true, restart count 0 May 8 01:50:37.191: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:50:37.191: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container kube-multus ready: true, restart count 1 May 8 01:50:37.191: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:50:37.191: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:50:37.191: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:50:37.191: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:50:37.191: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:50:37.191: INFO: Container collectd ready: true, restart count 0 May 8 01:50:37.191: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:50:37.191: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:50:37.191: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:50:37.191: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:50:37.191: INFO: Container node-exporter ready: true, restart count 0 May 8 01:50:37.191: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:50:37.191: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:50:37.191: INFO: Container grafana ready: true, restart count 0 May 8 01:50:37.191: INFO: Container prometheus ready: true, restart count 22 May 8 01:50:37.191: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:50:37.191: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-0 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-0 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-1 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-1 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-11 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-11 ready: true, restart count 0 May 8 
01:50:37.191: INFO: overcommit-13 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-13 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-15 from sched-pred-8674 started at 2021-05-08 01:50:16 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-15 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-16 from sched-pred-8674 started at 2021-05-08 01:50:16 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-16 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-17 from sched-pred-8674 started at 2021-05-08 01:50:16 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-17 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-2 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-2 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-3 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.191: INFO: Container overcommit-3 ready: true, restart count 0 May 8 01:50:37.191: INFO: overcommit-5 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.192: INFO: Container overcommit-5 ready: true, restart count 0 May 8 01:50:37.192: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:50:37.207: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:50:37.207: INFO: Container nodereport ready: true, restart count 0 May 8 01:50:37.207: INFO: Container reconcile ready: true, restart count 0 May 8 01:50:37.207: INFO: cmk-init-discover-node2-kd9gg from kube-system started at 2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:50:37.207: INFO: Container discover ready: false, restart count 0 May 8 01:50:37.207: INFO: Container init ready: false, restart count 0 May 8 01:50:37.207: INFO: Container install ready: false, restart count 0 May 8 01:50:37.207: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:50:37.207: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:50:37.207: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container kube-multus ready: true, restart count 1 May 8 01:50:37.207: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:50:37.207: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:50:37.207: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:50:37.207: INFO: 
node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:50:37.207: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:50:37.207: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:50:37.207: INFO: Container collectd ready: true, restart count 0 May 8 01:50:37.207: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:50:37.207: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:50:37.207: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:50:37.207: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:50:37.207: INFO: Container node-exporter ready: true, restart count 0 May 8 01:50:37.207: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:50:37.207: INFO: Container tas-controller ready: true, restart count 0 May 8 01:50:37.207: INFO: Container tas-extender ready: true, restart count 0 May 8 01:50:37.207: INFO: overcommit-10 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container overcommit-10 ready: true, restart count 0 May 8 01:50:37.207: INFO: overcommit-12 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container overcommit-12 ready: true, restart count 0 May 8 01:50:37.207: INFO: overcommit-14 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container overcommit-14 ready: true, restart count 0 May 8 01:50:37.207: INFO: overcommit-18 from sched-pred-8674 started at 2021-05-08 01:50:16 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container overcommit-18 ready: true, restart count 0 May 8 01:50:37.207: INFO: overcommit-19 from sched-pred-8674 started at 2021-05-08 01:50:16 +0000 UTC (1 container statuses recorded) May 8 01:50:37.207: INFO: Container overcommit-19 ready: true, restart count 0 May 8 01:50:37.208: INFO: overcommit-4 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.208: INFO: Container overcommit-4 ready: true, restart count 0 May 8 01:50:37.208: INFO: overcommit-6 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.208: INFO: Container overcommit-6 ready: true, restart count 0 May 8 01:50:37.208: INFO: overcommit-7 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.208: INFO: Container overcommit-7 ready: true, restart count 0 May 8 01:50:37.208: INFO: overcommit-8 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.208: INFO: Container overcommit-8 ready: true, restart count 0 May 8 01:50:37.208: INFO: overcommit-9 from sched-pred-8674 started at 2021-05-08 01:50:15 +0000 UTC (1 container statuses recorded) May 8 01:50:37.208: INFO: Container overcommit-9 ready: true, restart count 0 [It] 
validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-df9d595b-c666-444f-bd2c-a1bc98fd61ff 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-df9d595b-c666-444f-bd2c-a1bc98fd61ff off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-df9d595b-c666-444f-bd2c-a1bc98fd61ff [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:50:51.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7493" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.141 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":3,"skipped":1524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:50:51.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:50:51.304: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:50:51.311: INFO: Waiting for terminating namespaces to be deleted... 
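For the required-NodeAffinity case that just passed: the test puts a generated label (kubernetes.io/e2e-df9d595b-c666-444f-bd2c-a1bc98fd61ff=42 in this run) on node1 and then relaunches the pod with a required node affinity term matching it. A minimal sketch of such a pod spec follows, with a placeholder label key standing in for the generated one; it is not the test's own helper.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// withLabelsPod sketches a pod whose required node affinity matches the
// random label the test applied to node1. labelKey is a placeholder for
// the generated kubernetes.io/e2e-<uuid> key.
func withLabelsPod(labelKey, labelValue string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      labelKey,
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{labelValue},
							}},
						}},
					},
				},
			},
			Containers: []v1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
}

func main() {
	fmt.Println(withLabelsPod("kubernetes.io/e2e-example", "42").Name)
}
```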
May 8 01:50:51.313: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:50:51.322: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:50:51.322: INFO: Container nodereport ready: true, restart count 0 May 8 01:50:51.322: INFO: Container reconcile ready: true, restart count 0 May 8 01:50:51.322: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:50:51.322: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:50:51.322: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:50:51.322: INFO: Container kube-multus ready: true, restart count 1 May 8 01:50:51.322: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:50:51.322: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:50:51.322: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:50:51.322: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:50:51.322: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:50:51.322: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:50:51.322: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:50:51.322: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:50:51.322: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:50:51.322: INFO: Container collectd ready: true, restart count 0 May 8 01:50:51.322: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:50:51.322: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:50:51.322: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:50:51.322: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:50:51.323: INFO: Container node-exporter ready: true, restart count 0 May 8 01:50:51.323: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:50:51.323: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:50:51.323: INFO: Container grafana ready: true, restart count 0 May 8 01:50:51.323: INFO: Container prometheus ready: true, restart count 22 May 8 01:50:51.323: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:50:51.323: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:50:51.323: INFO: with-labels from sched-pred-7493 started at 2021-05-08 01:50:41 +0000 UTC (1 container statuses recorded) May 8 01:50:51.323: INFO: Container with-labels ready: true, restart count 0 May 8 01:50:51.323: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:50:51.330: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:50:51.330: INFO: Container nodereport ready: true, restart count 0 May 8 01:50:51.330: INFO: Container reconcile ready: true, restart count 0 May 8 01:50:51.330: INFO: cmk-init-discover-node2-kd9gg from kube-system 
started at 2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:50:51.330: INFO: Container discover ready: false, restart count 0 May 8 01:50:51.330: INFO: Container init ready: false, restart count 0 May 8 01:50:51.330: INFO: Container install ready: false, restart count 0 May 8 01:50:51.330: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:50:51.330: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:50:51.330: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container kube-multus ready: true, restart count 1 May 8 01:50:51.330: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:50:51.330: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:50:51.330: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:50:51.330: INFO: node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:50:51.330: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:50:51.330: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:50:51.330: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:50:51.330: INFO: Container collectd ready: true, restart count 0 May 8 01:50:51.330: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:50:51.330: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:50:51.330: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:50:51.330: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:50:51.330: INFO: Container node-exporter ready: true, restart count 0 May 8 01:50:51.330: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:50:51.330: INFO: Container tas-controller ready: true, restart count 0 May 8 01:50:51.330: INFO: Container tas-extender ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:51:05.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1395" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.167 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":4,"skipped":1556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:51:05.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 8 01:51:05.475: INFO: Waiting up to 1m0s for all nodes to be ready May 8 01:52:05.524: INFO: Waiting for terminating namespaces to be deleted... 
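The PodTopologySpread Filtering spec that just passed labels node1 and node2 with the dedicated topology key kubernetes.io/e2e-pts-filter and expects 4 pods with MaxSkew=1 to land two-and-two. A minimal sketch of a pod carrying such a hard spread constraint; the pod name and label selector are placeholders, only the topology key is taken from the log above.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadPod returns a pod that must spread evenly (MaxSkew=1) across the
// topology domains defined by the dedicated e2e topology key; DoNotSchedule
// makes the constraint a hard filter rather than a scoring preference.
func spreadPod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name, Labels: map[string]string{"app": "spread"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-filter",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "spread"}},
			}},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(spreadPod("spread-0"), "", "  ")
	fmt.Println(string(b))
}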
May 8 01:52:05.526: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 8 01:52:05.549: INFO: The status of Pod cmk-init-discover-node2-kd9gg is Succeeded, skipping waiting May 8 01:52:05.549: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 8 01:52:05.549: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 May 8 01:52:05.549: INFO: ComputeCPUMemFraction for node: node1 May 8 01:52:05.565: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:52:05.565: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:52:05.565: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:52:05.565: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:52:05.565: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:52:05.565: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:52:05.565: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:52:05.565: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:52:05.565: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:52:05.565: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:52:05.565: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 8 01:52:05.565: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 8 01:52:05.565: INFO: ComputeCPUMemFraction for node: node2 May 8 01:52:05.581: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:52:05.581: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:52:05.581: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:52:05.581: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:52:05.581: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:52:05.581: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:52:05.581: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:52:05.581: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:52:05.581: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:52:05.581: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:52:05.581: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:52:05.581: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:52:05.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:52:05.582: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 8 01:52:05.582: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 
0.003180511549857594 May 8 01:52:05.594: INFO: Waiting for running... May 8 01:52:10.655: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 8 01:52:15.707: INFO: ComputeCPUMemFraction for node: node1 May 8 01:52:15.724: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:52:15.724: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:52:15.724: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:52:15.724: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:52:15.724: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:52:15.724: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:52:15.724: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:52:15.724: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:52:15.724: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:52:15.724: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:52:15.724: INFO: Pod for on the node: 32b89e33-d09e-4908-95d4-85c9aadc0d33-0, Cpu: 37513, Mem: 87731509248 May 8 01:52:15.724: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 8 01:52:15.724: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 8 01:52:15.724: INFO: ComputeCPUMemFraction for node: node2 May 8 01:52:15.738: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:52:15.738: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:52:15.738: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:52:15.738: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:52:15.738: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:52:15.738: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:52:15.738: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:52:15.738: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:52:15.738: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:52:15.738: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:52:15.738: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:52:15.738: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:52:15.738: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:52:15.738: INFO: Pod for on the node: 7977bc4b-ab83-4a87-9cde-79578e02f4fd-0, Cpu: 37963, Mem: 88873371648 May 8 01:52:15.738: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 8 01:52:15.738: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Trying to apply 10 (tolerable) taints on the first node. 
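For reference, the ComputeCPUMemFraction output above can be reproduced directly: node1 starts at 987m CPU requested out of 77000m allocatable, and the balancing pod is then sized so the requested fraction reaches exactly 0.5. A small worked example using only the numbers logged above:

package main

import "fmt"

// Worked example of the node1 CPU figures logged by ComputeCPUMemFraction.
func main() {
	var (
		requestedMilliCPU   = 987   // sum of per-pod CPU requests logged for node1
		allocatableMilliCPU = 77000 // node1 cpuAllocatableMil from the log
		targetFraction      = 0.5
	)
	fraction := float64(requestedMilliCPU) / float64(allocatableMilliCPU)
	fmt.Printf("initial cpuFraction = %.6f\n", fraction) // ≈ 0.012818, matching the log

	balancer := int(targetFraction*float64(allocatableMilliCPU)) - requestedMilliCPU
	fmt.Printf("balancing pod CPU request = %dm\n", balancer) // 37513m, matching the log
}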
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3dad8830-7273-4057-b6de-ec6e4f613df3=testing-taint-value-815a3ddc-a8b0-49aa-a302-d60b1cd421ee:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3e8183b8-79e5-4c97-8d04-b74da0038fee=testing-taint-value-f14dcfcf-4026-4dc2-8f61-c7575b5aab7e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-19c6b4ce-5863-4d93-af5b-0f3b3ca3a117=testing-taint-value-cb3cb4a6-b6e3-402a-9807-027f25a85cb8:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-51be7750-8527-44c6-8351-dcec60e170bf=testing-taint-value-34ea4634-38cf-4e6d-94d6-baac2a43af3f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-ef702d9f-db8c-4e03-901c-04cd96122518=testing-taint-value-2ebc46ac-d2f1-47ef-b5c2-643905bb8205:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-004873ca-3e7d-4222-94a6-f848d1568cc9=testing-taint-value-cfa8a841-06c3-4ed8-baa6-f96702d9c05b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0fe5c998-3538-4202-aac8-31a6a539d93c=testing-taint-value-de557f0c-1eb6-484b-be56-f672f4779031:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3a9a34ec-e1f3-400a-9fb7-9c6095e07697=testing-taint-value-dbddab66-f98f-4bbd-933b-fc086a36777d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b16099f5-1021-4325-a30b-99ecf1981e9b=testing-taint-value-d296ef7e-a7ac-4b39-8797-fffbdc3026b6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f0783a24-6a8d-45ca-aadf-adf212f5d0f7=testing-taint-value-34ba1497-37e6-42ee-95b5-84e4a0514ec8:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c4cd94b6-2ac9-487a-903d-cf6239e05f13=testing-taint-value-dae41351-ce78-44f3-8fa9-e6ff4850ef54:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5b2bf8c9-35c9-41d1-869e-05037155c28d=testing-taint-value-9e210fdb-69e5-4b09-a245-7d87388dd5c2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f1b811af-a1a8-4dc9-ba48-f2ee35562296=testing-taint-value-8e0d8eea-e806-43bc-8012-091dd38f89ef:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-54395d10-d333-4229-aea8-e28a5756f2be=testing-taint-value-f81b60c5-dbe7-4c19-8f03-c778cf1aad0f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-10df66f7-27d7-4ff9-a13b-4443066cc39a=testing-taint-value-aa97ce06-dc16-4bfa-a64e-4546357147ed:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-168290d9-49e8-4c1f-ab65-b77e472f0b80=testing-taint-value-1df6952d-0e2f-4fd6-8fbb-b8cdc5cd2654:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-37e96ddb-86ae-449a-ae7f-e398e9018caa=testing-taint-value-65b8d629-bc5b-44ff-a0b1-6cde741fc5f2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-05cba3d4-cad3-48c3-8900-47173d671396=testing-taint-value-4fdbb426-e118-4c0d-8f03-e2fffb9a42d2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-eb700699-fac0-4cda-9b81-130ba21fec0d=testing-taint-value-02657293-b75a-415a-85df-c5b4e1a762c3:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-49a6cafc-06bf-4692-8c11-5b877e8a40c7=testing-taint-value-93928be0-4bbf-4984-b30b-699ae84ca51c:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-49a6cafc-06bf-4692-8c11-5b877e8a40c7=testing-taint-value-93928be0-4bbf-4984-b30b-699ae84ca51c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-eb700699-fac0-4cda-9b81-130ba21fec0d=testing-taint-value-02657293-b75a-415a-85df-c5b4e1a762c3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-05cba3d4-cad3-48c3-8900-47173d671396=testing-taint-value-4fdbb426-e118-4c0d-8f03-e2fffb9a42d2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-37e96ddb-86ae-449a-ae7f-e398e9018caa=testing-taint-value-65b8d629-bc5b-44ff-a0b1-6cde741fc5f2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-168290d9-49e8-4c1f-ab65-b77e472f0b80=testing-taint-value-1df6952d-0e2f-4fd6-8fbb-b8cdc5cd2654:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-10df66f7-27d7-4ff9-a13b-4443066cc39a=testing-taint-value-aa97ce06-dc16-4bfa-a64e-4546357147ed:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-54395d10-d333-4229-aea8-e28a5756f2be=testing-taint-value-f81b60c5-dbe7-4c19-8f03-c778cf1aad0f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f1b811af-a1a8-4dc9-ba48-f2ee35562296=testing-taint-value-8e0d8eea-e806-43bc-8012-091dd38f89ef:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5b2bf8c9-35c9-41d1-869e-05037155c28d=testing-taint-value-9e210fdb-69e5-4b09-a245-7d87388dd5c2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c4cd94b6-2ac9-487a-903d-cf6239e05f13=testing-taint-value-dae41351-ce78-44f3-8fa9-e6ff4850ef54:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f0783a24-6a8d-45ca-aadf-adf212f5d0f7=testing-taint-value-34ba1497-37e6-42ee-95b5-84e4a0514ec8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b16099f5-1021-4325-a30b-99ecf1981e9b=testing-taint-value-d296ef7e-a7ac-4b39-8797-fffbdc3026b6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3a9a34ec-e1f3-400a-9fb7-9c6095e07697=testing-taint-value-dbddab66-f98f-4bbd-933b-fc086a36777d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0fe5c998-3538-4202-aac8-31a6a539d93c=testing-taint-value-de557f0c-1eb6-484b-be56-f672f4779031:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-004873ca-3e7d-4222-94a6-f848d1568cc9=testing-taint-value-cfa8a841-06c3-4ed8-baa6-f96702d9c05b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-ef702d9f-db8c-4e03-901c-04cd96122518=testing-taint-value-2ebc46ac-d2f1-47ef-b5c2-643905bb8205:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-51be7750-8527-44c6-8351-dcec60e170bf=testing-taint-value-34ea4634-38cf-4e6d-94d6-baac2a43af3f:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-19c6b4ce-5863-4d93-af5b-0f3b3ca3a117=testing-taint-value-cb3cb4a6-b6e3-402a-9807-027f25a85cb8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3e8183b8-79e5-4c97-8d04-b74da0038fee=testing-taint-value-f14dcfcf-4026-4dc2-8f61-c7575b5aab7e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3dad8830-7273-4057-b6de-ec6e4f613df3=testing-taint-value-815a3ddc-a8b0-49aa-a302-d60b1cd421ee:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:52:27.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-18" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:81.673 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":5,"skipped":1710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:52:27.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 8 01:52:27.162: INFO: Waiting up to 1m0s for all nodes to be ready May 8 01:53:27.215: INFO: Waiting for terminating namespaces to be deleted... 
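The tolerate-taints spec that just passed puts ten PreferNoSchedule taints on the first node and ten intolerable ones on the other node, then creates a pod tolerating only the first node's taints so that scoring prefers it. A minimal sketch of one such toleration on a pod; the key and value are placeholders for the generated ones in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podToleratingTaint returns a pod carrying a toleration for one
// key=value:PreferNoSchedule taint, so the TaintToleration scoring plugin
// favors nodes whose taints are all tolerated.
func podToleratingTaint(key, value string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			Tolerations: []corev1.Toleration{{
				Key:      key,
				Operator: corev1.TolerationOpEqual,
				Value:    value,
				Effect:   corev1.TaintEffectPreferNoSchedule,
			}},
		},
	}
}

func main() {
	p := podToleratingTaint("kubernetes.io/e2e-taint-key-example", "testing-taint-value-example")
	b, _ := json.MarshalIndent(p, "", "  ")
	fmt.Println(string(b))
}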
May 8 01:53:27.218: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 8 01:53:27.235: INFO: The status of Pod cmk-init-discover-node2-kd9gg is Succeeded, skipping waiting May 8 01:53:27.235: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 8 01:53:27.235: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 May 8 01:53:35.312: INFO: ComputeCPUMemFraction for node: node1 May 8 01:53:35.328: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:53:35.328: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:53:35.328: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:53:35.328: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:53:35.328: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:53:35.328: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:53:35.328: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:53:35.328: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:53:35.328: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:53:35.328: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:53:35.328: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 8 01:53:35.328: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 8 01:53:35.328: INFO: ComputeCPUMemFraction for node: node2 May 8 01:53:35.344: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:53:35.344: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:53:35.344: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:53:35.344: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:53:35.344: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:53:35.344: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:53:35.344: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:53:35.344: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:53:35.344: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:53:35.344: INFO: Pod for on the node: 
sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:53:35.344: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:53:35.344: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:53:35.344: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:53:35.344: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 8 01:53:35.344: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 8 01:53:35.355: INFO: Waiting for running... May 8 01:53:40.417: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 8 01:53:45.468: INFO: ComputeCPUMemFraction for node: node1 May 8 01:53:45.485: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:53:45.485: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:53:45.485: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:53:45.485: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:53:45.485: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:53:45.485: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:53:45.485: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:53:45.485: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:53:45.485: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:53:45.485: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:53:45.485: INFO: Pod for on the node: fbd007c4-856c-4388-995c-3e84bf6a0d9d-0, Cpu: 37513, Mem: 87731509248 May 8 01:53:45.485: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 8 01:53:45.485: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 8 01:53:45.485: INFO: ComputeCPUMemFraction for node: node2 May 8 01:53:45.502: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:53:45.502: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:53:45.502: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:53:45.502: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:53:45.502: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:53:45.502: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:53:45.502: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:53:45.502: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:53:45.502: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:53:45.502: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:53:45.502: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:53:45.502: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:53:45.502: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:53:45.502: INFO: Pod for on the node: 8b30e47e-b316-47b0-83e7-016de9fc641f-0, Cpu: 37963, Mem: 88873371648 May 8 01:53:45.502: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 8 01:53:45.502: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Run a ReplicaSet with 4 replicas on node "node1" STEP: Verifying if the test-pod lands on node "node2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:54:05.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2077" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:98.436 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":6,"skipped":2578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:54:05.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 8 01:54:05.598: INFO: Waiting up to 1m0s for all nodes to be ready May 8 01:55:05.647: INFO: Waiting for terminating namespaces to be deleted... May 8 01:55:05.649: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 8 01:55:05.668: INFO: The status of Pod cmk-init-discover-node2-kd9gg is Succeeded, skipping waiting May 8 01:55:05.668: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 8 01:55:05.668: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
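The PodTopologySpread Scoring spec above uses the dedicated key kubernetes.io/e2e-pts-score with a soft constraint, so spreading only influences scoring: with 4 matching replicas already on node1, the test-pod is expected to land on node2. A minimal sketch of that kind of soft constraint; pod name and selector are placeholders, only the topology key comes from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// softSpreadPod returns a pod whose spread constraint uses ScheduleAnyway,
// making it a scoring preference rather than a hard filter.
func softSpreadPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Labels: map[string]string{"app": "pts-score"}},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score",
				WhenUnsatisfiable: corev1.ScheduleAnyway,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "pts-score"}},
			}},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(softSpreadPod(), "", "  ")
	fmt.Println(string(b))
}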
[It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 May 8 01:55:05.668: INFO: ComputeCPUMemFraction for node: node1 May 8 01:55:05.685: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:55:05.685: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:55:05.685: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:55:05.685: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:55:05.685: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:55:05.685: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:55:05.685: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:55:05.685: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:55:05.685: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:55:05.685: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:55:05.685: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 8 01:55:05.685: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 8 01:55:05.685: INFO: ComputeCPUMemFraction for node: node2 May 8 01:55:05.698: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:55:05.698: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:55:05.698: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:55:05.698: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:55:05.699: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:55:05.699: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:55:05.699: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:55:05.699: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:55:05.699: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:55:05.699: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:55:05.699: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:55:05.699: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:55:05.699: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:55:05.699: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 8 01:55:05.699: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 8 01:55:05.712: INFO: Waiting for running... May 8 01:55:10.775: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 8 01:55:15.827: INFO: ComputeCPUMemFraction for node: node1 May 8 01:55:15.843: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:55:15.843: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:55:15.843: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:55:15.843: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:55:15.843: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:55:15.843: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:55:15.843: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:55:15.843: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:55:15.843: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:55:15.843: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:55:15.843: INFO: Pod for on the node: eff1a62c-4c55-4d56-a01e-532fc47c557c-0, Cpu: 37513, Mem: 87731509248 May 8 01:55:15.843: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 8 01:55:15.843: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 8 01:55:15.843: INFO: ComputeCPUMemFraction for node: node2 May 8 01:55:15.859: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:55:15.859: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:55:15.859: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:55:15.859: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:55:15.859: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:55:15.860: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:55:15.860: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:55:15.860: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:55:15.860: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:55:15.860: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:55:15.860: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:55:15.860: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:55:15.860: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:55:15.860: INFO: Pod for on the node: f29d6746-53c7-4cf8-985a-3b20b5d13a26-0, Cpu: 37963, Mem: 88873371648 May 8 01:55:15.860: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 8 01:55:15.860: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. 
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-6964 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-6964, will wait for the garbage collector to delete the pods May 8 01:55:22.039: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.384182ms May 8 01:55:22.740: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.404385ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:55:35.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-6964" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:90.086 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":7,"skipped":2737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:55:35.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:55:35.692: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:55:35.700: INFO: Waiting for terminating namespaces to be deleted... 
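The avoidPod spec above annotates the first node so the scheduler deprioritizes pods owned by the scheduler-priority-avoid-pod ReplicationController there. A minimal sketch of building that node annotation, assuming the AvoidPods helper types and the PreferAvoidPodsAnnotationKey constant in k8s.io/api/core/v1; the owner UID and reason are placeholders, only the controller name is taken from the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// avoidPodsAnnotation builds the JSON value the scheduler reads from the
// preferAvoidPods node annotation to penalize pods owned by the given controller.
func avoidPodsAnnotation() (key, value string, err error) {
	ctrl := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "00000000-0000-0000-0000-000000000000", // placeholder UID
					Controller: &ctrl,
				},
			},
			Reason: "placeholder reason",
		}},
	}
	b, err := json.Marshal(avoid)
	return corev1.PreferAvoidPodsAnnotationKey, string(b), err
}

func main() {
	k, v, err := avoidPodsAnnotation()
	if err != nil {
		panic(err)
	}
	// This pair would be set in node.ObjectMeta.Annotations before scheduling.
	fmt.Printf("%s=%s\n", k, v)
}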
May 8 01:55:35.702: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:55:35.720: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:55:35.720: INFO: Container nodereport ready: true, restart count 0 May 8 01:55:35.720: INFO: Container reconcile ready: true, restart count 0 May 8 01:55:35.720: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:55:35.720: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:55:35.720: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:55:35.720: INFO: Container kube-multus ready: true, restart count 1 May 8 01:55:35.720: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:55:35.720: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:55:35.720: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:55:35.720: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:55:35.720: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:55:35.720: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:55:35.720: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:55:35.720: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:55:35.720: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:55:35.720: INFO: Container collectd ready: true, restart count 0 May 8 01:55:35.720: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:55:35.720: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:55:35.720: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:55:35.720: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:55:35.720: INFO: Container node-exporter ready: true, restart count 0 May 8 01:55:35.720: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:55:35.720: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:55:35.720: INFO: Container grafana ready: true, restart count 0 May 8 01:55:35.720: INFO: Container prometheus ready: true, restart count 22 May 8 01:55:35.720: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:55:35.720: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:55:35.720: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:55:35.734: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:55:35.734: INFO: Container nodereport ready: true, restart count 0 May 8 01:55:35.734: INFO: Container reconcile ready: true, restart count 0 May 8 01:55:35.734: INFO: cmk-init-discover-node2-kd9gg from kube-system started at 2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:55:35.734: INFO: Container discover ready: false, restart count 0 May 8 01:55:35.734: INFO: Container init ready: false, restart 
count 0 May 8 01:55:35.734: INFO: Container install ready: false, restart count 0 May 8 01:55:35.734: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:55:35.734: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:55:35.734: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container kube-multus ready: true, restart count 1 May 8 01:55:35.734: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:55:35.734: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:55:35.734: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:55:35.734: INFO: node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:55:35.734: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:55:35.734: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:55:35.734: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:55:35.734: INFO: Container collectd ready: true, restart count 0 May 8 01:55:35.734: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:55:35.734: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:55:35.734: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:55:35.734: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:55:35.734: INFO: Container node-exporter ready: true, restart count 0 May 8 01:55:35.734: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:55:35.734: INFO: Container tas-controller ready: true, restart count 0 May 8 01:55:35.734: INFO: Container tas-extender ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. 
STEP: Considering event: Type = [Warning], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a15ae75ba2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a15b3e30b2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a1df11b6eb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2158/filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a2319b5d8d], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.135/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a2324dcdb7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a2513e25e5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 519.059868ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a257d0ce6d], Reason = [Created], Message = [Created container filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007] STEP: Considering event: Type = [Normal], Name = [filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007.167cf4a25db5b502], Reason = [Started], Message = [Started container filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a06a4668a3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2158/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a0bbb82fee], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.134/24]] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a0bc7dfae9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a0db3a4ce0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 515.644582ms] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a0e1ba1a42], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a0e78ea914], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.167cf4a1656d2eff], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod505c1d38-9076-4180-8fa5-1e258de29e16.167cf4a2c1c79ed5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [additional-pod505c1d38-9076-4180-8fa5-1e258de29e16.167cf4a2c21c09b0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] 
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:55:46.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2158" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":8,"skipped":3024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:55:46.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:55:46.873: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:55:46.881: INFO: Waiting for terminating namespaces to be deleted... 
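The pod-overhead spec that just passed registers a RuntimeClass whose overhead is charged against the fake extended resource example.com/beardsecond, fills a node, and then confirms one more pod fails with "Insufficient example.com/beardsecond". A minimal sketch of the two objects involved, assuming the node.k8s.io/v1beta1 API served by this v1.19 cluster; the handler name and quantities are illustrative, not the test's values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// runtimeClassWithOverhead declares a per-pod fixed overhead that the scheduler
// adds on top of the pod's container requests when checking node capacity.
func runtimeClassWithOverhead() *nodev1beta1.RuntimeClass {
	return &nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		Handler:    "runc",
		Overhead: &nodev1beta1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceName("example.com/beardsecond"): resource.MustParse("250"),
			},
		},
	}
}

// podUsingRuntimeClass requests the fake extended resource and references the
// RuntimeClass, so its effective demand is request + overhead.
func podUsingRuntimeClass(rcName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceName("example.com/beardsecond"): resource.MustParse("500"),
					},
				},
			}},
		},
	}
}

func main() {
	rc := runtimeClassWithOverhead()
	for _, obj := range []interface{}{rc, podUsingRuntimeClass(rc.Name)} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}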
May 8 01:55:46.883: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:55:46.892: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:55:46.892: INFO: Container nodereport ready: true, restart count 0 May 8 01:55:46.892: INFO: Container reconcile ready: true, restart count 0 May 8 01:55:46.892: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:55:46.892: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:55:46.892: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:55:46.892: INFO: Container kube-multus ready: true, restart count 1 May 8 01:55:46.892: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:55:46.892: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:55:46.892: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:55:46.892: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:55:46.892: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:55:46.892: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:55:46.892: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:55:46.892: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:55:46.892: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:55:46.892: INFO: Container collectd ready: true, restart count 0 May 8 01:55:46.892: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:55:46.892: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:55:46.892: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:55:46.892: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:55:46.892: INFO: Container node-exporter ready: true, restart count 0 May 8 01:55:46.893: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:55:46.893: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:55:46.893: INFO: Container grafana ready: true, restart count 0 May 8 01:55:46.893: INFO: Container prometheus ready: true, restart count 22 May 8 01:55:46.893: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:55:46.893: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:55:46.893: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:55:46.900: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:55:46.900: INFO: Container nodereport ready: true, restart count 0 May 8 01:55:46.900: INFO: Container reconcile ready: true, restart count 0 May 8 01:55:46.900: INFO: cmk-init-discover-node2-kd9gg from kube-system started at 2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:55:46.900: INFO: Container discover ready: false, restart count 0 May 8 01:55:46.900: INFO: Container init ready: false, restart 
count 0 May 8 01:55:46.900: INFO: Container install ready: false, restart count 0 May 8 01:55:46.900: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:55:46.900: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:55:46.900: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container kube-multus ready: true, restart count 1 May 8 01:55:46.900: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:55:46.900: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:55:46.900: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:55:46.900: INFO: node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:55:46.900: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:55:46.900: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:55:46.900: INFO: Container collectd ready: true, restart count 0 May 8 01:55:46.900: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:55:46.900: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:55:46.900: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:55:46.900: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:55:46.900: INFO: Container node-exporter ready: true, restart count 0 May 8 01:55:46.900: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:55:46.900: INFO: Container tas-controller ready: true, restart count 0 May 8 01:55:46.900: INFO: Container tas-extender ready: true, restart count 0 May 8 01:55:46.900: INFO: filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007 from sched-pred-2158 started at 2021-05-08 01:55:42 +0000 UTC (1 container statuses recorded) May 8 01:55:46.900: INFO: Container filler-pod-5dfcca54-6585-411e-a2ef-431d1f83f007 ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.167cf4a46d16dc11], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.167cf4a46d6999ed], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:55:53.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2810" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.150 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":9,"skipped":3403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:55:54.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:55:54.035: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:55:54.042: INFO: Waiting for terminating namespaces to be deleted... 
May 8 01:55:54.044: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:55:54.057: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:55:54.057: INFO: Container nodereport ready: true, restart count 0 May 8 01:55:54.057: INFO: Container reconcile ready: true, restart count 0 May 8 01:55:54.057: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:55:54.057: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:55:54.057: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:55:54.057: INFO: Container kube-multus ready: true, restart count 1 May 8 01:55:54.057: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:55:54.057: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:55:54.057: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:55:54.057: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:55:54.057: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:55:54.057: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:55:54.057: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:55:54.057: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:55:54.057: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:55:54.057: INFO: Container collectd ready: true, restart count 0 May 8 01:55:54.057: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:55:54.057: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:55:54.057: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:55:54.057: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:55:54.057: INFO: Container node-exporter ready: true, restart count 0 May 8 01:55:54.057: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:55:54.057: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:55:54.057: INFO: Container grafana ready: true, restart count 0 May 8 01:55:54.057: INFO: Container prometheus ready: true, restart count 22 May 8 01:55:54.057: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:55:54.057: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:55:54.057: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:55:54.068: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:55:54.068: INFO: Container nodereport ready: true, restart count 0 May 8 01:55:54.068: INFO: Container reconcile ready: true, restart count 0 May 8 01:55:54.068: INFO: cmk-init-discover-node2-kd9gg from kube-system started at 2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:55:54.068: INFO: Container discover ready: false, restart count 0 May 8 01:55:54.068: INFO: Container init ready: false, restart 
count 0 May 8 01:55:54.068: INFO: Container install ready: false, restart count 0 May 8 01:55:54.068: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:55:54.068: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:55:54.068: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container kube-multus ready: true, restart count 1 May 8 01:55:54.068: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:55:54.068: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:55:54.068: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:55:54.068: INFO: node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:55:54.068: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:55:54.068: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:55:54.069: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:55:54.069: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:55:54.069: INFO: Container collectd ready: true, restart count 0 May 8 01:55:54.069: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:55:54.069: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:55:54.069: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:55:54.069: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:55:54.069: INFO: Container node-exporter ready: true, restart count 0 May 8 01:55:54.069: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:55:54.069: INFO: Container tas-controller ready: true, restart count 0 May 8 01:55:54.069: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-label-key-de8fd16a-57b0-40fb-9c56-aa8bdf54f748 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a4ae48d470], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1891/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a50a8d4052], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.136/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a50b3fce5b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a5278f7111], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 474.973404ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a52de0cb0d], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a533be447a], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a59e22e913], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167cf4a59ff483c5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167cf4a5a04700e8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [without-toleration.167cf4a5a0972e20], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-9cvpp" : object "sched-pred-1891"/"default-token-9cvpp" not registered] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167cf4a59ff483c5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167cf4a5a04700e8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a4ae48d470], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1891/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a50a8d4052], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.136/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a50b3fce5b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a5278f7111], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 474.973404ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a52de0cb0d], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a533be447a], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167cf4a59e22e913], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [without-toleration.167cf4a5a0972e20], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-9cvpp" : object "sched-pred-1891"/"default-token-9cvpp" not registered] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.167cf4a64bce560d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1891/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-de8fd16a-57b0-40fb-9c56-aa8bdf54f748 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-de8fd16a-57b0-40fb-9c56-aa8bdf54f748 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:56:01.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1891" for this suite. 
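For reference, a minimal Go sketch of the combination the FailedScheduling messages above describe (the taint key, label key and pod name are copied from the log; the container image and the exact helpers the suite uses are assumptions): a random NoSchedule taint on the chosen node, plus a pod that selects that node through the random label but carries no toleration, so it cannot schedule until the taint is removed.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// NoSchedule taint of the kind the test places on the chosen node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-327faf47-343c-469d-bb73-69fe4625a560",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// Pod pinned to the tainted node via nodeSelector but with no matching
	// toleration: the scheduler reports "1 node(s) had taint ... that the pod
	// didn't tolerate, 4 node(s) didn't match node selector".
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "still-no-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-de8fd16a-57b0-40fb-9c56-aa8bdf54f748": "testing-label-value",
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	fmt.Println(taint.Key, pod.Name)
}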
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.174 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":10,"skipped":4155,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:56:01.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 8 01:56:01.214: INFO: Waiting up to 1m0s for all nodes to be ready May 8 01:57:01.262: INFO: Waiting for terminating namespaces to be deleted... May 8 01:57:01.264: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 8 01:57:01.282: INFO: The status of Pod cmk-init-discover-node2-kd9gg is Succeeded, skipping waiting May 8 01:57:01.282: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 8 01:57:01.282: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 STEP: Trying to launch a pod with a label to get a node which can launch it. 
STEP: Verifying the node has a label kubernetes.io/hostname May 8 01:57:05.305: INFO: ComputeCPUMemFraction for node: node1 May 8 01:57:05.322: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:57:05.322: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:57:05.322: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:57:05.322: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:57:05.322: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:57:05.322: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:57:05.322: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:57:05.322: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:57:05.322: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:57:05.322: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:57:05.322: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 8 01:57:05.322: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 8 01:57:05.322: INFO: ComputeCPUMemFraction for node: node2 May 8 01:57:05.338: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:57:05.338: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:57:05.338: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:57:05.338: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:57:05.338: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:57:05.338: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:57:05.338: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:57:05.338: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:57:05.338: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:57:05.338: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:57:05.338: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:57:05.338: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:57:05.338: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:57:05.338: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 8 01:57:05.338: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 8 01:57:05.338: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 8 01:57:05.352: INFO: Waiting for running... May 8 01:57:10.413: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 8 01:57:15.464: INFO: ComputeCPUMemFraction for node: node1 May 8 01:57:15.482: INFO: Pod for on the node: cmk-qzhwr, Cpu: 200, Mem: 419430400 May 8 01:57:15.482: INFO: Pod for on the node: kube-flannel-qm7lv, Cpu: 150, Mem: 64000000 May 8 01:57:15.482: INFO: Pod for on the node: kube-multus-ds-amd64-fxgdb, Cpu: 100, Mem: 94371840 May 8 01:57:15.482: INFO: Pod for on the node: kube-proxy-bms7z, Cpu: 100, Mem: 209715200 May 8 01:57:15.482: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 8 01:57:15.482: INFO: Pod for on the node: node-feature-discovery-worker-t66pk, Cpu: 100, Mem: 209715200 May 8 01:57:15.482: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp, Cpu: 100, Mem: 209715200 May 8 01:57:15.482: INFO: Pod for on the node: collectd-h2lg2, Cpu: 300, Mem: 629145600 May 8 01:57:15.482: INFO: Pod for on the node: node-exporter-qv7mz, Cpu: 112, Mem: 209715200 May 8 01:57:15.482: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 8 01:57:15.482: INFO: Pod for on the node: f1c0187d-8045-4090-a176-2f788866e655-0, Cpu: 45213, Mem: 105619972505 May 8 01:57:15.482: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 8 01:57:15.482: INFO: Node: node1, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 8 01:57:15.482: INFO: ComputeCPUMemFraction for node: node2 May 8 01:57:15.495: INFO: Pod for on the node: cmk-gvh7j, Cpu: 200, Mem: 419430400 May 8 01:57:15.495: INFO: Pod for on the node: cmk-init-discover-node2-kd9gg, Cpu: 300, Mem: 629145600 May 8 01:57:15.495: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-94s58, Cpu: 100, Mem: 209715200 May 8 01:57:15.495: INFO: Pod for on the node: kube-flannel-htqkx, Cpu: 150, Mem: 64000000 May 8 01:57:15.495: INFO: Pod for on the node: kube-multus-ds-amd64-g98hm, Cpu: 100, Mem: 94371840 May 8 01:57:15.495: INFO: Pod for on the node: kube-proxy-rgw7h, Cpu: 100, Mem: 209715200 May 8 01:57:15.495: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-k9cj2, Cpu: 50, Mem: 64000000 May 8 01:57:15.495: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 8 01:57:15.495: INFO: Pod for on the node: node-feature-discovery-worker-wp5n6, Cpu: 100, Mem: 209715200 May 8 01:57:15.495: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z, Cpu: 100, Mem: 209715200 May 8 01:57:15.495: INFO: Pod for on the node: collectd-p5gbt, Cpu: 300, Mem: 629145600 May 8 01:57:15.495: INFO: Pod for on the node: node-exporter-4bcls, Cpu: 112, Mem: 209715200 May 8 01:57:15.495: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f, Cpu: 200, Mem: 419430400 May 8 01:57:15.495: INFO: Pod for on the node: 702334e9-8076-4a5d-9133-989afaa6f2ee-0, Cpu: 45663, Mem: 106761834905 May 8 01:57:15.495: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 8 01:57:15.495: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 8 01:57:15.495: INFO: Node: node2, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. 
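After balancing both nodes to roughly 0.6 of their CPU and memory allocatable, the priorities test launches a pod whose required anti-affinity term must steer it away from the node running the labelled pod. A minimal Go sketch of that anti-affinity (assuming the security=S1 label implied by the pod name pod-with-label-security-s1; the image and the rest of the spec are illustrative, not the suite's actual code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod with a required anti-affinity term against pods labelled security=S1
	// within the kubernetes.io/hostname topology: it must not share a node with
	// pod-with-label-security-s1, so it is scheduled onto the other node.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchLabels: map[string]string{"security": "S1"},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	fmt.Println(pod.Name)
}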
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:57:25.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-6031" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:84.347 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":11,"skipped":4463,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 8 01:57:25.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 8 01:57:25.562: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 8 01:57:25.570: INFO: Waiting for terminating namespaces to be deleted... 
May 8 01:57:25.572: INFO: Logging pods the apiserver thinks is on node node1 before test May 8 01:57:25.581: INFO: cmk-qzhwr from kube-system started at 2021-05-08 00:41:14 +0000 UTC (2 container statuses recorded) May 8 01:57:25.581: INFO: Container nodereport ready: true, restart count 0 May 8 01:57:25.581: INFO: Container reconcile ready: true, restart count 0 May 8 01:57:25.581: INFO: kube-flannel-qm7lv from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:57:25.581: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:57:25.581: INFO: kube-multus-ds-amd64-fxgdb from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:57:25.581: INFO: Container kube-multus ready: true, restart count 1 May 8 01:57:25.581: INFO: kube-proxy-bms7z from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:57:25.581: INFO: Container kube-proxy ready: true, restart count 2 May 8 01:57:25.581: INFO: nginx-proxy-node1 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:57:25.581: INFO: Container nginx-proxy ready: true, restart count 1 May 8 01:57:25.581: INFO: node-feature-discovery-worker-t66pk from kube-system started at 2021-05-08 00:41:16 +0000 UTC (1 container statuses recorded) May 8 01:57:25.581: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:57:25.581: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zwmrp from kube-system started at 2021-05-08 00:41:13 +0000 UTC (1 container statuses recorded) May 8 01:57:25.581: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:57:25.581: INFO: collectd-h2lg2 from monitoring started at 2021-05-08 00:41:45 +0000 UTC (3 container statuses recorded) May 8 01:57:25.581: INFO: Container collectd ready: true, restart count 0 May 8 01:57:25.581: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:57:25.581: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:57:25.581: INFO: node-exporter-qv7mz from monitoring started at 2021-05-08 00:41:15 +0000 UTC (2 container statuses recorded) May 8 01:57:25.581: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:57:25.581: INFO: Container node-exporter ready: true, restart count 0 May 8 01:57:25.581: INFO: prometheus-k8s-0 from monitoring started at 2021-05-08 00:41:17 +0000 UTC (5 container statuses recorded) May 8 01:57:25.581: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 8 01:57:25.581: INFO: Container grafana ready: true, restart count 0 May 8 01:57:25.581: INFO: Container prometheus ready: true, restart count 22 May 8 01:57:25.581: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 8 01:57:25.582: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 8 01:57:25.582: INFO: pod-with-pod-antiaffinity from sched-priority-6031 started at 2021-05-08 01:57:15 +0000 UTC (1 container statuses recorded) May 8 01:57:25.582: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 May 8 01:57:25.582: INFO: Logging pods the apiserver thinks is on node node2 before test May 8 01:57:25.588: INFO: cmk-gvh7j from kube-system started at 2021-05-07 20:11:49 +0000 UTC (2 container statuses recorded) May 8 01:57:25.589: INFO: Container nodereport ready: true, restart count 0 May 8 01:57:25.589: INFO: Container reconcile ready: true, restart count 0 May 8 01:57:25.589: INFO: 
cmk-init-discover-node2-kd9gg from kube-system started at 2021-05-07 20:11:26 +0000 UTC (3 container statuses recorded) May 8 01:57:25.589: INFO: Container discover ready: false, restart count 0 May 8 01:57:25.589: INFO: Container init ready: false, restart count 0 May 8 01:57:25.589: INFO: Container install ready: false, restart count 0 May 8 01:57:25.589: INFO: cmk-webhook-6c9d5f8578-94s58 from kube-system started at 2021-05-07 20:11:49 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container cmk-webhook ready: true, restart count 0 May 8 01:57:25.589: INFO: kube-flannel-htqkx from kube-system started at 2021-05-07 20:02:02 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container kube-flannel ready: true, restart count 2 May 8 01:57:25.589: INFO: kube-multus-ds-amd64-g98hm from kube-system started at 2021-05-07 20:02:10 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container kube-multus ready: true, restart count 1 May 8 01:57:25.589: INFO: kube-proxy-rgw7h from kube-system started at 2021-05-07 20:01:27 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container kube-proxy ready: true, restart count 1 May 8 01:57:25.589: INFO: kubernetes-dashboard-86c6f9df5b-k9cj2 from kube-system started at 2021-05-07 20:02:35 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 8 01:57:25.589: INFO: nginx-proxy-node2 from kube-system started at 2021-05-07 20:07:46 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container nginx-proxy ready: true, restart count 2 May 8 01:57:25.589: INFO: node-feature-discovery-worker-wp5n6 from kube-system started at 2021-05-07 20:08:19 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container nfd-worker ready: true, restart count 0 May 8 01:57:25.589: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-tkw8z from kube-system started at 2021-05-07 20:09:23 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container kube-sriovdp ready: true, restart count 0 May 8 01:57:25.589: INFO: collectd-p5gbt from monitoring started at 2021-05-07 20:18:33 +0000 UTC (3 container statuses recorded) May 8 01:57:25.589: INFO: Container collectd ready: true, restart count 0 May 8 01:57:25.589: INFO: Container collectd-exporter ready: true, restart count 0 May 8 01:57:25.589: INFO: Container rbac-proxy ready: true, restart count 0 May 8 01:57:25.589: INFO: node-exporter-4bcls from monitoring started at 2021-05-07 20:12:42 +0000 UTC (2 container statuses recorded) May 8 01:57:25.589: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 8 01:57:25.589: INFO: Container node-exporter ready: true, restart count 0 May 8 01:57:25.589: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-8z46f from monitoring started at 2021-05-07 20:15:36 +0000 UTC (2 container statuses recorded) May 8 01:57:25.589: INFO: Container tas-controller ready: true, restart count 0 May 8 01:57:25.589: INFO: Container tas-extender ready: true, restart count 0 May 8 01:57:25.589: INFO: pod-with-label-security-s1 from sched-priority-6031 started at 2021-05-08 01:57:01 +0000 UTC (1 container statuses recorded) May 8 01:57:25.589: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a 
pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f61f7aeb-25b5-4b6d-b96e-aee174317405=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-715b2fe2-e024-445d-a6cd-927269b3a60f testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-715b2fe2-e024-445d-a6cd-927269b3a60f off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-715b2fe2-e024-445d-a6cd-927269b3a60f STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f61f7aeb-25b5-4b6d-b96e-aee174317405=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 8 01:57:33.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7863" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.154 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":12,"skipped":4483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 8 01:57:33.711: INFO: Running AfterSuite actions on all nodes May 8 01:57:33.711: INFO: Running AfterSuite actions on node 1 May 8 01:57:33.711: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0} Ran 12 of 5484 Specs in 538.463 seconds SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped PASS Ginkgo ran 1 suite in 8m59.664448996s Test Suite Passed
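For completeness, the matching counterpart to the earlier taints case, covering the final spec of the run, sketched in Go (the pod name, image and use of nodeSelector are assumptions; the taint and label keys are the ones logged above): giving the relaunched pod a toleration equal to the random NoSchedule taint lets it land on the tainted, labelled node, which is what "taints-tolerations is respected if matching" verifies.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Relaunched pod, now tolerating the random NoSchedule taint and selecting
	// the node through the random label, so the scheduler can place it there.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-715b2fe2-e024-445d-a6cd-927269b3a60f": "testing-label-value",
			},
			Tolerations: []corev1.Toleration{{
				Key:      "kubernetes.io/e2e-taint-key-f61f7aeb-25b5-4b6d-b96e-aee174317405",
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	fmt.Println(pod.Name)
}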