I0529 01:06:16.163704 21 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0529 01:06:16.163844 21 e2e.go:129] Starting e2e run "cb2079e3-46ec-4df8-bb88-1e3c4c23dbfb" on Ginkgo node 1
{"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1622250374 - Will randomize all specs
Will run 12 of 5484 specs

May 29 01:06:16.184: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:06:16.189: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 29 01:06:16.217: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 29 01:06:16.280: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting
May 29 01:06:16.280: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 29 01:06:16.280: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 29 01:06:16.280: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 29 01:06:16.297: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 29 01:06:16.297: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 29 01:06:16.297: INFO: e2e test version: v1.19.11
May 29 01:06:16.298: INFO: kube-apiserver version: v1.19.8
May 29 01:06:16.298: INFO: >>> kubeConfig: /root/.kube/config
May 29 01:06:16.303: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption
  validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:06:16.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
May 29 01:06:16.324: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 29 01:06:16.327: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 29 01:06:16.337: INFO: Waiting up to 1m0s for all nodes to be ready
May 29 01:07:16.390: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node2.
STEP: Apply 10 fake resource to node node1.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 29 01:07:50.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-3362" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:94.402 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":1,"skipped":79,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that required NodeAffinity setting is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 29 01:07:50.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 29 01:07:50.726: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 29 01:07:50.735: INFO: Waiting for terminating namespaces to be deleted...
May 29 01:07:50.738: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:07:50.749: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:07:50.749: INFO: Container discover ready: false, restart count 0 May 29 01:07:50.749: INFO: Container init ready: false, restart count 0 May 29 01:07:50.749: INFO: Container install ready: false, restart count 0 May 29 01:07:50.749: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:07:50.749: INFO: Container nodereport ready: true, restart count 0 May 29 01:07:50.749: INFO: Container reconcile ready: true, restart count 0 May 29 01:07:50.749: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:07:50.749: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:07:50.749: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container kube-multus ready: true, restart count 1 May 29 01:07:50.749: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:07:50.749: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:07:50.749: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:07:50.749: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:07:50.749: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:07:50.749: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:07:50.749: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:07:50.749: INFO: Container collectd ready: true, restart count 0 May 29 01:07:50.749: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:07:50.749: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:07:50.749: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:07:50.749: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:07:50.749: INFO: Container node-exporter ready: true, restart count 0 May 29 01:07:50.749: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:07:50.749: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:07:50.749: INFO: Container grafana ready: true, restart count 0 May 29 01:07:50.749: INFO: Container prometheus ready: true, restart count 1 May 29 01:07:50.749: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:07:50.749: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:07:50.749: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:07:50.749: INFO: Container tas-controller ready: true, restart count 0 May 29 01:07:50.749: INFO: Container tas-extender ready: true, restart count 0 May 29 01:07:50.749: INFO: high from sched-preemption-3362 started at 2021-05-29 01:07:26 +0000 UTC (1 container statuses recorded) May 29 01:07:50.749: INFO: Container high ready: true, restart count 0 May 29 01:07:50.749: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:07:50.768: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:07:50.768: INFO: Container nodereport ready: true, restart count 0 May 29 01:07:50.768: INFO: Container reconcile ready: true, restart count 0 May 29 01:07:50.768: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:07:50.768: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container kube-multus ready: true, restart count 1 May 29 01:07:50.768: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:07:50.768: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:07:50.768: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:07:50.768: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:07:50.768: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:07:50.768: INFO: Container collectd ready: true, restart count 0 May 29 01:07:50.768: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:07:50.768: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:07:50.768: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:07:50.768: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:07:50.768: INFO: Container node-exporter ready: true, restart count 0 May 29 01:07:50.768: INFO: low-1 from sched-preemption-3362 started at 2021-05-29 01:07:30 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container low-1 ready: 
true, restart count 0 May 29 01:07:50.768: INFO: medium from sched-preemption-3362 started at 2021-05-29 01:07:44 +0000 UTC (1 container statuses recorded) May 29 01:07:50.768: INFO: Container medium ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-26714d39-ab0f-4279-b037-86dd736f245a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-26714d39-ab0f-4279-b037-86dd736f245a off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-26714d39-ab0f-4279-b037-86dd736f245a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:07:58.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9242" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.145 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":2,"skipped":184,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:07:58.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 29 01:07:58.873: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 01:07:58.881: INFO: Waiting for terminating namespaces to be deleted... 
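The spec that just passed ("required NodeAffinity setting is respected if matching") labels node2 with kubernetes.io/e2e-26714d39-ab0f-4279-b037-86dd736f245a=42 and relaunches the pod with a required node affinity on exactly that label, so it can only land on node2. A minimal sketch of such a pod, reusing the key and value from the log (the container image is an assumption):

```go
// Sketch of a pod that can only schedule onto a node carrying the random
// label the spec applies; key and value are taken from the log above, the
// image is illustrative.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				NodeAffinity: &v1.NodeAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
						NodeSelectorTerms: []v1.NodeSelectorTerm{{
							MatchExpressions: []v1.NodeSelectorRequirement{{
								Key:      "kubernetes.io/e2e-26714d39-ab0f-4279-b037-86dd736f245a",
								Operator: v1.NodeSelectorOpIn,
								Values:   []string{"42"},
							}},
						}},
					},
				},
			},
			Containers: []v1.Container{{Name: "with-labels", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```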
May 29 01:07:58.884: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:07:58.893: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:07:58.893: INFO: Container discover ready: false, restart count 0 May 29 01:07:58.893: INFO: Container init ready: false, restart count 0 May 29 01:07:58.893: INFO: Container install ready: false, restart count 0 May 29 01:07:58.893: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:07:58.893: INFO: Container nodereport ready: true, restart count 0 May 29 01:07:58.893: INFO: Container reconcile ready: true, restart count 0 May 29 01:07:58.893: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:07:58.893: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:07:58.893: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container kube-multus ready: true, restart count 1 May 29 01:07:58.893: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:07:58.893: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:07:58.893: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:07:58.893: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:07:58.893: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:07:58.893: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:07:58.893: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:07:58.893: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:07:58.893: INFO: Container collectd ready: true, restart count 0 May 29 01:07:58.893: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:07:58.893: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:07:58.893: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:07:58.893: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:07:58.894: INFO: Container node-exporter ready: true, restart count 0 May 29 01:07:58.894: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:07:58.894: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:07:58.894: INFO: Container grafana ready: true, restart count 0 May 29 01:07:58.894: INFO: Container prometheus ready: true, restart count 1 May 29 01:07:58.894: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:07:58.894: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:07:58.894: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:07:58.894: INFO: Container tas-controller ready: true, restart count 0 May 29 01:07:58.894: INFO: Container tas-extender ready: true, restart count 0 May 29 01:07:58.894: INFO: high from sched-preemption-3362 started at 2021-05-29 01:07:26 +0000 UTC (1 container statuses recorded) May 29 01:07:58.894: INFO: Container high ready: true, restart count 0 May 29 01:07:58.894: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:07:58.904: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:07:58.904: INFO: Container nodereport ready: true, restart count 0 May 29 01:07:58.904: INFO: Container reconcile ready: true, restart count 0 May 29 01:07:58.904: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:07:58.904: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container kube-multus ready: true, restart count 1 May 29 01:07:58.904: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:07:58.904: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:07:58.904: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:07:58.904: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:07:58.904: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:07:58.904: INFO: Container collectd ready: true, restart count 0 May 29 01:07:58.904: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:07:58.904: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:07:58.904: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:07:58.904: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:07:58.904: INFO: Container node-exporter ready: true, restart count 0 May 29 01:07:58.904: INFO: with-labels from sched-pred-9242 started at 2021-05-29 01:07:54 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container with-labels 
ready: true, restart count 0 May 29 01:07:58.904: INFO: low-1 from sched-preemption-3362 started at 2021-05-29 01:07:30 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container low-1 ready: false, restart count 0 May 29 01:07:58.904: INFO: medium from sched-preemption-3362 started at 2021-05-29 01:07:44 +0000 UTC (1 container statuses recorded) May 29 01:07:58.904: INFO: Container medium ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.168364372db24984], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:07:59.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7838" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":3,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:07:59.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 29 01:07:59.971: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 01:07:59.980: INFO: Waiting for terminating namespaces to be deleted... 
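The "NodeAffinity is respected if not matching" spec above only needs a pod that no node can satisfy; it asserts on the FailedScheduling event quoted in the log ("0/5 nodes are available: 5 node(s) didn't match node selector.") rather than on a running pod. A sketch of such a deliberately unschedulable pod, with an assumed placeholder nodeSelector:

```go
// Sketch of a deliberately unschedulable pod: its nodeSelector names a label
// no node in the cluster carries, so the scheduler emits FailedScheduling.
// The selector key/value and the image are placeholders; the pod name echoes
// the "restricted-pod" event above.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{"label": "nonempty"}, // assumed placeholder selector
			Containers:   []v1.Container{{Name: "restricted-pod", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```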
May 29 01:07:59.982: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:08:00.007: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:08:00.007: INFO: Container discover ready: false, restart count 0 May 29 01:08:00.007: INFO: Container init ready: false, restart count 0 May 29 01:08:00.007: INFO: Container install ready: false, restart count 0 May 29 01:08:00.007: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:08:00.007: INFO: Container nodereport ready: true, restart count 0 May 29 01:08:00.007: INFO: Container reconcile ready: true, restart count 0 May 29 01:08:00.007: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:08:00.007: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:08:00.007: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container kube-multus ready: true, restart count 1 May 29 01:08:00.007: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:08:00.007: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:08:00.007: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:08:00.007: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:08:00.007: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:08:00.007: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:08:00.007: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:08:00.007: INFO: Container collectd ready: true, restart count 0 May 29 01:08:00.007: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:08:00.007: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:08:00.007: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:08:00.007: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:08:00.007: INFO: Container node-exporter ready: true, restart count 0 May 29 01:08:00.007: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:08:00.007: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:08:00.007: INFO: Container grafana ready: true, restart count 0 May 29 01:08:00.007: INFO: Container prometheus ready: true, restart count 1 May 29 01:08:00.007: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:08:00.007: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:08:00.007: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:08:00.007: INFO: Container tas-controller ready: true, restart count 0 May 29 01:08:00.007: INFO: Container tas-extender ready: true, restart count 0 May 29 01:08:00.007: INFO: high from sched-preemption-3362 started at 2021-05-29 01:07:26 +0000 UTC (1 container statuses recorded) May 29 01:08:00.007: INFO: Container high ready: false, restart count 0 May 29 01:08:00.007: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:08:00.017: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:08:00.017: INFO: Container nodereport ready: true, restart count 0 May 29 01:08:00.017: INFO: Container reconcile ready: true, restart count 0 May 29 01:08:00.017: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:08:00.017: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container kube-multus ready: true, restart count 1 May 29 01:08:00.017: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:08:00.017: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:08:00.017: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:08:00.017: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:08:00.017: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:08:00.017: INFO: Container collectd ready: true, restart count 0 May 29 01:08:00.017: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:08:00.017: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:08:00.017: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:08:00.017: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:08:00.017: INFO: Container node-exporter ready: true, restart count 0 May 29 01:08:00.017: INFO: with-labels from sched-pred-9242 started at 2021-05-29 01:07:54 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container with-labels 
ready: true, restart count 0 May 29 01:08:00.017: INFO: low-1 from sched-preemption-3362 started at 2021-05-29 01:07:30 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container low-1 ready: false, restart count 0 May 29 01:08:00.017: INFO: medium from sched-preemption-3362 started at 2021-05-29 01:07:44 +0000 UTC (1 container statuses recorded) May 29 01:08:00.017: INFO: Container medium ready: false, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1850b304-41ce-4dc9-8c84-0a3719926f96=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-f2a1f1f6-d118-4a9b-b2eb-348b3d7d49ce testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-f2a1f1f6-d118-4a9b-b2eb-348b3d7d49ce off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-f2a1f1f6-d118-4a9b-b2eb-348b3d7d49ce STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1850b304-41ce-4dc9-8c84-0a3719926f96=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:08:08.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6633" for this suite. 
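The taints-tolerations spec above taints node2 with kubernetes.io/e2e-taint-key-1850b304-41ce-4dc9-8c84-0a3719926f96=testing-taint-value:NoSchedule, labels it with kubernetes.io/e2e-label-key-f2a1f1f6-d118-4a9b-b2eb-348b3d7d49ce=testing-label-value, and relaunches the pod with a matching toleration so it is admitted onto that node. A sketch of the relaunched pod, reusing the taint and label from the log (pod name and image are placeholders):

```go
// Sketch of the relaunched pod: it tolerates the random NoSchedule taint and
// is pinned to the labelled node via nodeSelector. Taint and label key/values
// come from the log above; the pod name and image are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: v1.PodSpec{
			Tolerations: []v1.Toleration{{
				Key:      "kubernetes.io/e2e-taint-key-1850b304-41ce-4dc9-8c84-0a3719926f96",
				Operator: v1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   v1.TaintEffectNoSchedule,
			}},
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-f2a1f1f6-d118-4a9b-b2eb-348b3d7d49ce": "testing-label-value",
			},
			Containers: []v1.Container{{Name: "with-tolerations", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```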
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":4,"skipped":231,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:08:08.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 29 01:08:08.168: INFO: Waiting up to 1m0s for all nodes to be ready May 29 01:09:08.218: INFO: Waiting for terminating namespaces to be deleted... May 29 01:09:08.220: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 29 01:09:08.239: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting May 29 01:09:08.239: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 29 01:09:08.239: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
[BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 May 29 01:09:16.332: INFO: ComputeCPUMemFraction for node: node2 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:09:16.332: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 May 29 01:09:16.332: INFO: ComputeCPUMemFraction for node: node1 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 
01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:09:16.332: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:09:16.332: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 May 29 01:09:16.342: INFO: Waiting for running... May 29 01:09:21.406: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 29 01:09:26.482: INFO: ComputeCPUMemFraction for node: node2 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Node: node2, totalRequestedCPUResource: 384100, cpuAllocatableMil: 77000, cpuFraction: 1 May 29 01:09:26.482: INFO: Node: node2, totalRequestedMemResource: 893479424000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 29 01:09:26.482: INFO: ComputeCPUMemFraction for node: node1 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Pod for on the node: f3c6762d-0e02-4117-8905-1b2c548ae4b2-0, Cpu: 38400, Mem: 89337456640 May 29 01:09:26.482: INFO: Node: node1, totalRequestedCPUResource: 614500, cpuAllocatableMil: 77000, cpuFraction: 1 May 29 01:09:26.482: INFO: Node: node1, totalRequestedMemResource: 1429504163840, memAllocatableVal: 178884628480, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:09:52.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5595" for this suite. 
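In the fraction logs above, cpuFraction is simply totalRequestedCPUResource divided by cpuAllocatableMil (100 / 77000 ≈ 0.0013), and memFraction is the analogous memory ratio; the balancing pods then drive both nodes to fraction 1 so that only topology spread scoring differentiates them. With a 4-replica ReplicaSet pinned to node2, the incoming test-pod is expected to land on node1. A sketch of what such a test pod could look like, assuming a soft spread constraint over the dedicated key kubernetes.io/e2e-pts-score (label selector, pod name and image are placeholders):

```go
// Sketch of the incoming "test-pod": a soft (ScheduleAnyway) topology spread
// constraint over the dedicated key kubernetes.io/e2e-pts-score makes node1
// preferable once node2 already runs the 4 matching ReplicaSet replicas.
// The label selector, pod name and image are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"e2e-pts-score": "yes"}, // placeholder, must match the ReplicaSet pods
		},
		Spec: v1.PodSpec{
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score", // key applied by the test
				WhenUnsatisfiable: v1.ScheduleAnyway,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-score": "yes"},
				},
			}},
			Containers: []v1.Container{{Name: "test-pod", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```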
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:104.420 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":5,"skipped":1512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:09:52.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 29 01:09:52.594: INFO: Waiting up to 1m0s for all nodes to be ready May 29 01:10:52.644: INFO: Waiting for terminating namespaces to be deleted... May 29 01:10:52.646: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 29 01:10:52.664: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting May 29 01:10:52.664: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 29 01:10:52.664: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 STEP: Trying to launch a pod with a label to get a node which can launch it. 
STEP: Verifying the node has a label kubernetes.io/hostname May 29 01:10:56.707: INFO: ComputeCPUMemFraction for node: node1 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:10:56.708: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 May 29 01:10:56.708: INFO: ComputeCPUMemFraction for node: node2 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:10:56.708: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:10:56.708: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 May 29 01:10:56.719: INFO: Waiting for running... May 29 01:11:01.782: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 29 01:11:06.851: INFO: ComputeCPUMemFraction for node: node1 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:11:06.851: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 29 01:11:06.851: INFO: ComputeCPUMemFraction for node: node2 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 29 01:11:06.851: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:11:06.851: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Trying to launch the pod with podAntiAffinity. 
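The pod launched at this step carries a required podAntiAffinity term, so it must avoid any node already running the security=S1 pod started earlier (the selector is inferred from the pod-with-label-security-s1 name and is an assumption; the kubernetes.io/hostname topology key matches the node label verified above). A minimal sketch:

```go
// Sketch of the anti-affinity pod: required podAntiAffinity keeps it off any
// node that already runs a pod labelled security=S1. The selector is inferred
// from the "pod-with-label-security-s1" name and should be treated as an
// assumption; pod name and image are illustrative.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: v1.PodSpec{
			Affinity: &v1.Affinity{
				PodAntiAffinity: &v1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
						TopologyKey: "kubernetes.io/hostname",
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"S1"},
							}},
						},
					}},
				},
			},
			Containers: []v1.Container{{Name: "pod-with-pod-antiaffinity", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```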
STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:11:20.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9823" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:88.328 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":6,"skipped":1814,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:11:20.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 29 01:11:20.928: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 01:11:20.936: INFO: Waiting for terminating namespaces to be deleted... 
May 29 01:11:20.938: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:11:20.954: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:11:20.954: INFO: Container discover ready: false, restart count 0 May 29 01:11:20.954: INFO: Container init ready: false, restart count 0 May 29 01:11:20.954: INFO: Container install ready: false, restart count 0 May 29 01:11:20.954: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:11:20.954: INFO: Container nodereport ready: true, restart count 0 May 29 01:11:20.954: INFO: Container reconcile ready: true, restart count 0 May 29 01:11:20.954: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:11:20.954: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:11:20.954: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container kube-multus ready: true, restart count 1 May 29 01:11:20.954: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:11:20.954: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:11:20.954: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:11:20.954: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:11:20.954: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:11:20.954: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:11:20.954: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:11:20.954: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:11:20.954: INFO: Container collectd ready: true, restart count 0 May 29 01:11:20.955: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:11:20.955: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:11:20.955: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:11:20.955: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:11:20.955: INFO: Container node-exporter ready: true, restart count 0 May 29 01:11:20.955: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:11:20.955: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:11:20.955: INFO: Container grafana ready: true, restart count 0 May 29 01:11:20.955: INFO: Container prometheus ready: true, restart count 1 May 29 01:11:20.955: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:11:20.955: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:11:20.955: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:11:20.955: INFO: Container tas-controller ready: true, restart count 0 May 29 01:11:20.955: INFO: Container tas-extender ready: true, restart count 0 May 29 01:11:20.955: INFO: pod-with-pod-antiaffinity from sched-priority-9823 started at 2021-05-29 01:11:06 +0000 UTC (1 container statuses recorded) May 29 01:11:20.955: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 May 29 01:11:20.955: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:11:20.972: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:11:20.972: INFO: Container nodereport ready: true, restart count 0 May 29 01:11:20.972: INFO: Container reconcile ready: true, restart count 0 May 29 01:11:20.972: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:11:20.972: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:11:20.972: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:11:20.972: INFO: Container kube-multus ready: true, restart count 1 May 29 01:11:20.972: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:11:20.972: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:11:20.972: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:11:20.972: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:11:20.972: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:11:20.972: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:11:20.972: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:11:20.972: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:11:20.972: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:11:20.972: INFO: Container collectd ready: true, restart count 0 May 29 01:11:20.972: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:11:20.972: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:11:20.972: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:11:20.973: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:11:20.973: INFO: Container node-exporter ready: true, restart count 0 May 29 01:11:20.973: INFO: pod-with-label-security-s1 from sched-priority-9823 started at 2021-05-29 01:10:52 +0000 UTC (1 container statuses 
recorded) May 29 01:11:20.973: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 29 01:11:21.004: INFO: Pod cmk-jhzjr requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod cmk-lbg6n requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.004: INFO: Pod cmk-webhook-6c9d5f8578-kt8bp requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod kube-flannel-2tjjt requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod kube-flannel-d9wsg requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.004: INFO: Pod kube-multus-ds-amd64-c9cj2 requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.004: INFO: Pod kube-multus-ds-amd64-x7826 requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod kube-proxy-lsngv requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod kube-proxy-z5czn requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.004: INFO: Pod kubernetes-dashboard-86c6f9df5b-c5sbq requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod kubernetes-metrics-scraper-678c97765c-wblkm requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.004: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.005: INFO: Pod node-feature-discovery-worker-2qfpd requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.005: INFO: Pod node-feature-discovery-worker-5x4qg requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.005: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod collectd-k6rzg requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.005: INFO: Pod collectd-qw9nd requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod node-exporter-khdpg requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod node-exporter-nsrbd requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.005: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Pod pod-with-label-security-s1 requesting local ephemeral resource =0 on Node node2 May 29 01:11:21.005: INFO: Pod pod-with-pod-antiaffinity requesting local ephemeral resource =0 on Node node1 May 29 01:11:21.005: INFO: Using pod capacity: 40542413347 May 29 01:11:21.005: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 May 29 01:11:21.005: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 29 01:11:21.197: INFO: Waiting for running... 
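For scale: each worker advertises 405424133473 bytes of allocatable local ephemeral storage, and the test picks a per-pod capacity of 40542413347 bytes (one tenth of that), so 10 pods per worker, 20 in total, saturate the resource and the extra pod must fail scheduling, as the "Insufficient ephemeral-storage" event further down confirms. The following is a minimal sketch of a pod carrying such an ephemeral-storage footprint, written against the Kubernetes Go API types; the name, image, and the use of identical requests and limits are illustrative assumptions, not the suite's code.

// ephemeral_storage_sketch.go — a minimal sketch of a pod whose scheduling footprint is a
// fixed amount of local ephemeral storage, in the spirit of the "overcommit-N" pods above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// 40542413347 bytes matches the "Using pod capacity" value in the log:
	// roughly one tenth of a node's allocatable ephemeral storage (405424133473).
	perPod := resource.MustParse("40542413347")

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-0"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "overcommit-0",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceEphemeralStorage: perPod},
					Limits:   corev1.ResourceList{corev1.ResourceEphemeralStorage: perPod},
				},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // prints the pod spec as a JSON manifest
}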
STEP: Considering event: Type = [Normal], Name = [overcommit-0.168364663aaf0d5d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16836467c95fb9a5], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.35/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16836467d50fcc18], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16836467f150d159], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 474.014779ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.168364680762d143], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.1683646833ee2c81], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168364663b356b5c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.168364677183a190], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.33/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16836467a3bda902], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16836467c06c5fed], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 481.203743ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16836467d4c1fa7a], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16836468278d5e36], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16836466405e27b3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-10 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1683646876e1f6ab], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.112/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1683646878777481], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16836469591fa253], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.769111308s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.168364696058182c], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.1683646966bb89b1], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.1683646640e05ee4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-11 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168364687897a163], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.118/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168364687944e739], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.168364697670e551], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 4.247507414s] STEP: Considering event: Type = [Normal], Name = 
[overcommit-11.168364697dc3cce3], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.1683646984230f28], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168364664162b1d6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1683646872b422ac], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.111/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168364687710ef25], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1683646920aa0b8b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.845377365s] STEP: Considering event: Type = [Normal], Name = [overcommit-12.1683646927e4e72e], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.168364692e8ca51a], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1683646641eb3a34], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1683646878a409cc], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.117/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.1683646879496b0b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168364699115a52c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 4.694220311s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16836469984f15cf], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.168364699ecdcab4], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.168364664270d019], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1683646872bfc046], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.113/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16836468770f4b4b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16836468ffb88d47], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.292786312s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.1683646909d8044f], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16836469159dea1b], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168364664304ad43], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16836467e7398adc], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.109/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168364682698f47f], Reason = [Pulling], Message = [Pulling 
image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16836468448c8853], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 502.492889ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.1683646857b3a862], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.168364687bac9724], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16836466438cd0f6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168364681d8592af], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.110/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.1683646857b59c16], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16836468800ec659], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 676.924306ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.168364688c56e26b], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16836468985ded37], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168364664418a224], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.1683646872a7ff07], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.116/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.1683646876a03a52], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.168364689e9ef9b6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 670.993035ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16836468a592a817], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16836468ab5c54bb], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.1683646644acdd36], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.168364687253b82c], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.114/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.1683646876a05a06], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16836468bf625d15], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.220667873s] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16836468c6a48e94], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16836468cc8aeb18], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.1683646645472eda], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-19 to node1] STEP: Considering 
event: Type = [Normal], Name = [overcommit-19.1683646873e25299], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.115/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168364687714ff62], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.168364693bf00bd9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.302678399s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.1683646942fbf469], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16836469495044e3], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168364663bcaaeeb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1683646811f38d55], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.37/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1683646818161079], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16836468656015d7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.296689224s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.168364686da861e5], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.1683646874a7918d], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168364663c7c469a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1683646825109f2c], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.40/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.1683646831f411ab], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16836468840586cc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.37686628s] STEP: Considering event: Type = [Normal], Name = [overcommit-3.168364688af8b5cd], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16836468910fa24a], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.168364663ced5a1c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.1683646832b21a68], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.41/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.1683646833f5cf19], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16836468bf67afeb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.33948795s] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16836468c7460c7e], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16836468cd5713aa], Reason = [Started], Message = 
[Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168364663d7315a8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168364680d7c44e9], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.38/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168364681808d2f9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168364684947fb6d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 826.214109ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.168364684f9b5723], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.1683646855a5528c], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168364663e077236], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16836467ea10f3f6], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.34/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168364680d38fdc8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168364682b2dfa15], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 502.584684ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168364683889caf0], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.168364684870f998], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.168364663ea6a75f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1683646831df1da0], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.39/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.1683646833cd2652], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16836468a14bbc8d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.837004198s] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16836468a7e8008e], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16836468ae00eb2c], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168364663f3624ff], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168364671326ed80], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.32/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.1683646732042c22], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.168364674d6a0a30], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 459.648624ms] STEP: Considering event: Type = [Normal], 
Name = [overcommit-8.1683646767425ddd], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16836467d416a964], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168364663fd0e2b2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3215/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16836467ce90ca37], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.36/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16836467e9b9f966], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168364680d7c9b62], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 599.949402ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.1683646827b85233], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.168364683d68a3be], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16836469c7716add], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:11:37.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3215" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.380 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":7,"skipped":2166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] 
SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:11:37.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 29 01:11:37.319: INFO: Waiting up to 1m0s for all nodes to be ready May 29 01:12:37.371: INFO: Waiting for terminating namespaces to be deleted... May 29 01:12:37.373: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 29 01:12:37.392: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting May 29 01:12:37.392: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 29 01:12:37.392: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 May 29 01:12:37.406: INFO: ComputeCPUMemFraction for node: node1 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 
0.0012987012987012987 May 29 01:12:37.406: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 May 29 01:12:37.406: INFO: ComputeCPUMemFraction for node: node2 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:12:37.406: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:12:37.406: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 May 29 01:12:37.420: INFO: Waiting for running... May 29 01:12:42.485: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 29 01:12:47.556: INFO: ComputeCPUMemFraction for node: node1 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: 
f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Node: node1, totalRequestedCPUResource: 614500, cpuAllocatableMil: 77000, cpuFraction: 1 May 29 01:12:47.556: INFO: Node: node1, totalRequestedMemResource: 1429504163840, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 29 01:12:47.556: INFO: ComputeCPUMemFraction for node: node2 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Pod for on the node: f9d4c5dc-efd9-436d-94fe-2b67b93c8492-0, Cpu: 38400, Mem: 89337456640 May 29 01:12:47.556: INFO: Node: node2, totalRequestedCPUResource: 384100, cpuAllocatableMil: 77000, cpuFraction: 1 May 29 01:12:47.556: INFO: Node: node2, totalRequestedMemResource: 893479424000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8870 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8870, will wait for the garbage collector to delete the pods May 29 01:12:58.740: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 6.777669ms May 29 01:12:59.440: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.480832ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:13:20.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8870" for this suite. 
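The "avoidPod annotation" exercised above is, as understood for this release line, the node annotation scheduler.alpha.kubernetes.io/preferAvoidPods, whose value is a JSON-encoded core/v1 AvoidPods payload naming the controller whose pods the scheduler should steer away from; verify the key and schema against the API types for your Kubernetes version. A hedged sketch of building that payload in Go follows; the ReplicationController name mirrors the log, while the UID, reason, and message are placeholders.

// avoid_pods_sketch.go — a hedged sketch of how a node can be annotated so the scheduler's
// NodePreferAvoidPods priority steers pods of a given controller away from it.
// The annotation key and AvoidPods schema reflect the core/v1 types as recalled here;
// the OwnerReference UID and the reason/message strings are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",          // matches the RC name in the log
					UID:        "00000000-0000-0000-0000-000000000000", // placeholder
					Controller: &isController,
				},
			},
			Reason:  "placeholder reason",
			Message: "placeholder message",
		}},
	}

	payload, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}

	// The annotation would then be set on the node object, e.g.
	// node.Annotations["scheduler.alpha.kubernetes.io/preferAvoidPods"] = string(payload)
	annotations := map[string]string{
		"scheduler.alpha.kubernetes.io/preferAvoidPods": string(payload),
	}
	fmt.Println(annotations)
}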
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:103.269 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":8,"skipped":3027,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:13:20.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 29 01:13:20.607: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 01:13:20.616: INFO: Waiting for terminating namespaces to be deleted... 
May 29 01:13:20.620: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:13:20.644: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:13:20.644: INFO: Container discover ready: false, restart count 0 May 29 01:13:20.644: INFO: Container init ready: false, restart count 0 May 29 01:13:20.644: INFO: Container install ready: false, restart count 0 May 29 01:13:20.644: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:13:20.644: INFO: Container nodereport ready: true, restart count 0 May 29 01:13:20.644: INFO: Container reconcile ready: true, restart count 0 May 29 01:13:20.644: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:13:20.644: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:13:20.644: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container kube-multus ready: true, restart count 1 May 29 01:13:20.644: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:13:20.644: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:13:20.644: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:13:20.644: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:13:20.644: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:13:20.644: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:13:20.644: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:13:20.644: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:13:20.644: INFO: Container collectd ready: true, restart count 0 May 29 01:13:20.644: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:13:20.644: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:13:20.644: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:13:20.644: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:13:20.644: INFO: Container node-exporter ready: true, restart count 0 May 29 01:13:20.644: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:13:20.644: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:13:20.644: INFO: Container grafana ready: true, restart count 0 May 29 01:13:20.644: INFO: Container prometheus ready: true, restart count 1 May 29 01:13:20.644: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:13:20.644: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:13:20.644: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:13:20.644: INFO: Container tas-controller ready: true, restart count 0 May 29 01:13:20.644: INFO: Container tas-extender ready: true, restart count 0 May 29 01:13:20.644: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:13:20.654: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:13:20.654: INFO: Container nodereport ready: true, restart count 0 May 29 01:13:20.654: INFO: Container reconcile ready: true, restart count 0 May 29 01:13:20.654: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:13:20.654: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:13:20.654: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:13:20.654: INFO: Container kube-multus ready: true, restart count 1 May 29 01:13:20.654: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:13:20.654: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:13:20.654: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:13:20.654: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:13:20.654: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:13:20.654: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:13:20.654: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:13:20.654: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:13:20.654: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:13:20.654: INFO: Container collectd ready: true, restart count 0 May 29 01:13:20.654: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:13:20.654: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:13:20.654: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:13:20.654: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:13:20.654: INFO: Container node-exporter ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. 
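For reference, the random NoSchedule taint about to be applied, and the toleration a pod would need in order to schedule past it, look like the following when sketched with the Kubernetes Go API types; the taint key and value are taken from the log lines that follow, while the Equal operator is an illustrative choice.

// taint_toleration_sketch.go — a minimal sketch of the NoSchedule taint used in this spec
// and of the toleration a pod would need to schedule onto the tainted node. The taint
// key/value are copied from the log; the operator choice is illustrative.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-468a927d-2b6b-48b6-8294-cd18b1bda0c6",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// A pod tolerating this exact taint; the "still-no-tolerations" pod in the log
	// deliberately omits this and stays unschedulable until the taint is removed.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	fmt.Printf("node taint: %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("pod toleration: %+v\n", toleration)
}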
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-468a927d-2b6b-48b6-8294-cd18b1bda0c6=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-e82171d2-3001-4e31-ac28-25e098a1e15a testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482164a832e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4184/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.1683648271356675], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.44/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482720cd753], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.1683648291783c78], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 527.124907ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482ddf36d4b], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482e3d1b45b], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168364837d372739], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168364837ec7c8e7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-468a927d-2b6b-48b6-8294-cd18b1bda0c6: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.168364837ec7c8e7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-468a927d-2b6b-48b6-8294-cd18b1bda0c6: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482164a832e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4184/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.1683648271356675], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.44/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482720cd753], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.1683648291783c78], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 527.124907ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482ddf36d4b], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16836482e3d1b45b], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.168364837d372739], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-468a927d-2b6b-48b6-8294-cd18b1bda0c6=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16836483ffc5ddbe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4184/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-e82171d2-3001-4e31-ac28-25e098a1e15a off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-e82171d2-3001-4e31-ac28-25e098a1e15a STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-468a927d-2b6b-48b6-8294-cd18b1bda0c6=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:13:29.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4184" for this suite. 
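Cleanup mirrors setup: the taint and label are removed and their absence verified before the namespace is destroyed. A minimal sketch of dropping a taint by key with client-go, assuming a placeholder node name and key rather than the framework's own helper, might look like:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// removeTaint drops every taint with the given key from a node and writes
// the node back. Illustrative only; the e2e framework also re-reads the
// node afterwards to verify the taint is really gone.
func removeTaint(ctx context.Context, cs kubernetes.Interface, nodeName, taintKey string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := make([]corev1.Taint, 0, len(node.Spec.Taints))
	for _, t := range node.Spec.Taints {
		if t.Key != taintKey {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := removeTaint(context.Background(), cs, "node2", "example.com/demo-taint-key"); err != nil {
		panic(err)
	}
	fmt.Println("taint removed")
}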
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:9.185 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":9,"skipped":4453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:13:29.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 29 01:13:29.797: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 01:13:29.805: INFO: Waiting for terminating namespaces to be deleted... 
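Before running, each spec in this suite dumps the pods the API server reports on every node, as in the listings that follow. A rough stand-alone equivalent of that per-node dump, assuming a spec.nodeName field selector and the node names from this cluster, is sketched below; the framework's own helper may collect it differently.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// List pods in all namespaces that are bound to each node, similar to
	// the "pods the apiserver thinks is on node ..." dumps in the log.
	for _, nodeName := range []string{"node1", "node2"} {
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + nodeName,
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s on %s, phase=%s\n", p.Namespace, p.Name, nodeName, p.Status.Phase)
		}
	}
}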
May 29 01:13:29.809: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:13:29.827: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:13:29.827: INFO: Container discover ready: false, restart count 0 May 29 01:13:29.827: INFO: Container init ready: false, restart count 0 May 29 01:13:29.827: INFO: Container install ready: false, restart count 0 May 29 01:13:29.827: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:13:29.827: INFO: Container nodereport ready: true, restart count 0 May 29 01:13:29.827: INFO: Container reconcile ready: true, restart count 0 May 29 01:13:29.827: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:13:29.827: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:13:29.827: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container kube-multus ready: true, restart count 1 May 29 01:13:29.827: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:13:29.827: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:13:29.827: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:13:29.827: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:13:29.827: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:13:29.827: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:13:29.827: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:13:29.827: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:13:29.827: INFO: Container collectd ready: true, restart count 0 May 29 01:13:29.827: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:13:29.827: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:13:29.827: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:13:29.827: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:13:29.827: INFO: Container node-exporter ready: true, restart count 0 May 29 01:13:29.827: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:13:29.827: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:13:29.827: INFO: Container grafana ready: true, restart count 0 May 29 01:13:29.827: INFO: Container prometheus ready: true, restart count 1 May 29 01:13:29.827: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:13:29.827: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:13:29.827: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:13:29.827: INFO: Container tas-controller ready: true, restart count 0 May 29 01:13:29.827: INFO: Container tas-extender ready: true, restart count 0 May 29 01:13:29.827: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:13:29.835: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:13:29.836: INFO: Container nodereport ready: true, restart count 0 May 29 01:13:29.836: INFO: Container reconcile ready: true, restart count 0 May 29 01:13:29.836: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:13:29.836: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container kube-multus ready: true, restart count 1 May 29 01:13:29.836: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:13:29.836: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:13:29.836: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:13:29.836: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:13:29.836: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:13:29.836: INFO: Container collectd ready: true, restart count 0 May 29 01:13:29.836: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:13:29.836: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:13:29.836: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:13:29.836: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:13:29.836: INFO: Container node-exporter ready: true, restart count 0 May 29 01:13:29.836: INFO: still-no-tolerations from sched-pred-4184 started at 2021-05-29 01:13:28 +0000 UTC (1 container statuses recorded) May 29 01:13:29.836: INFO: Container still-no-tolerations ready: false, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 
STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:13:43.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5249" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.173 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":10,"skipped":4529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:13:43.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 29 01:13:43.969: INFO: Waiting up to 1m0s for all nodes to be ready May 29 01:14:44.019: INFO: Waiting for terminating namespaces to be deleted... 
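The PodTopologySpread Filtering spec that just passed labels two nodes with the dedicated topology key kubernetes.io/e2e-pts-filter and expects 4 pods with MaxSkew=1 to land 2+2 across them. For illustration only, a pod using such a constraint could be built as below; the pod labels and selector are placeholders rather than the test's exact objects.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pods carrying this label are the ones counted by the constraint.
	labels := map[string]string{"app": "pts-demo"}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "pts-demo-",
			Labels:       labels,
		},
		Spec: corev1.PodSpec{
			// MaxSkew=1 over the per-node topology key means the matching
			// pod counts on any two labelled nodes may differ by at most
			// one, so 4 such pods end up evenly split 2+2.
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-filter",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}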
May 29 01:14:44.022: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 29 01:14:44.042: INFO: The status of Pod cmk-init-discover-node1-rvqxm is Succeeded, skipping waiting May 29 01:14:44.042: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 29 01:14:44.042: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 May 29 01:14:44.060: INFO: ComputeCPUMemFraction for node: node1 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:14:44.060: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 May 29 01:14:44.060: INFO: ComputeCPUMemFraction for node: node2 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28, Cpu: 200, Mem: 419430400 May 29 01:14:44.060: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 May 29 01:14:44.060: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 May 29 01:14:44.074: INFO: Waiting for running... May 29 01:14:49.140: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 29 01:14:54.210: INFO: ComputeCPUMemFraction for node: node1 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Node: node1, totalRequestedCPUResource: 614500, cpuAllocatableMil: 77000, cpuFraction: 1 May 29 01:14:54.210: INFO: Node: node1, totalRequestedMemResource: 1429504163840, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
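The fractions logged above are straight ratios of summed pod requests to node allocatable, capped at 1; the priorities test then creates "balanced" filler pods so both nodes start the actual scoring comparison from the same utilisation. A small stand-alone reproduction of that arithmetic, using the node1 figures from this log, is below.

package main

import "fmt"

// fraction mirrors the ComputeCPUMemFraction numbers in the log: total
// requested resource divided by node allocatable, capped at 1.
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// node1 before the balanced pods are created
	// (totalRequestedCPUResource: 100, cpuAllocatableMil: 77000).
	fmt.Println(fraction(100, 77000))              // 0.0012987012987012987
	fmt.Println(fraction(104857600, 178884628480)) // ~0.0005861744571961558

	// After the balanced filler pods the summed requests exceed
	// allocatable, so both fractions are reported as 1.
	fmt.Println(fraction(614500, 77000))               // 1
	fmt.Println(fraction(1429504163840, 178884628480)) // 1
}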
May 29 01:14:54.210: INFO: ComputeCPUMemFraction for node: node2 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Pod for on the node: 0c5f1ef4-8455-4cb7-9e9f-18efcc20384d-0, Cpu: 38400, Mem: 89337456640 May 29 01:14:54.210: INFO: Node: node2, totalRequestedCPUResource: 384100, cpuAllocatableMil: 77000, cpuFraction: 1 May 29 01:14:54.210: INFO: Node: node2, totalRequestedMemResource: 893479424000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-937594b3-4ee2-48b3-b2a5-16d8867ed91b=testing-taint-value-4016fbcd-c754-4d3c-9470-82ad1487e083:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c8291e0e-16df-4328-8c10-9d02c9de09c4=testing-taint-value-1d2edad9-3ae7-4e7a-9ef8-9adb5be23375:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8667661c-8d6b-479c-86d8-3ae3958b0e13=testing-taint-value-273cf672-3df0-42bc-ba17-ddfc2e43acdf:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-30a57bd8-fd9e-49f9-bdd9-2bc02dbc7510=testing-taint-value-50c35f73-c9fd-4c34-a0da-11f4948efdbb:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7c067ccb-0aba-42a1-a5bf-1a7e84a1eacd=testing-taint-value-bf0d3cb6-209e-4e41-b782-4252778e3bc3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e96a272d-35fd-4491-b147-159cc49ae865=testing-taint-value-66dc1d5b-39d6-489b-a80b-3e6db2f06ccb:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0a8a92b2-1508-44a5-a253-095ebec612a3=testing-taint-value-3c9a04de-88da-4363-9c07-f69aa51d6c74:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-adfc40d9-57b9-4196-9057-06504583d079=testing-taint-value-8883c59f-028d-4198-a854-49a21f03b1e9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e61cafc2-851e-4775-8be2-8768ecc7ff6b=testing-taint-value-d42f1f4e-087f-46c8-b1e2-2e5caef624b3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f94abc56-1f3c-4f73-acc8-573400e7055c=testing-taint-value-51b7b234-3df9-40d5-a904-72f342fdc531:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-07179776-1d5d-4f2e-bbe4-a234407e6081=testing-taint-value-b3151bf0-c2d2-4f56-b19e-6b8d2a1ef2c4:PreferNoSchedule 
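Unlike the NoSchedule taint used earlier, these PreferNoSchedule taints only lower a node's score for pods that do not tolerate them, so a pod tolerating all ten taints on the first node should land there by preference rather than by exclusion. As a sketch, a matching soft taint/toleration pair for one key could look like the following; the key and value are placeholders for the random ones the test generates.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A soft taint: nodes carrying it are scored lower for intolerant
	// pods, but scheduling onto them is still allowed.
	taint := corev1.Taint{
		Key:    "example.com/demo-soft-taint",
		Value:  "demo-value",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// The pod-side toleration that cancels the scoring penalty. The test
	// pod carries one of these for each of the ten taints on node1.
	tol := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("taint:      %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration: %s %s %s (%s)\n", tol.Key, tol.Operator, tol.Value, tol.Effect)
}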
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3a62df43-daaf-4b4f-823a-8f1ceb117101=testing-taint-value-4a37184a-3d64-4d21-8e06-cc03965dafc2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3aef7162-0c9c-4c29-b624-b8daa7238d64=testing-taint-value-6eaf0d36-88bd-454a-89dc-9cf4e7735ddd:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c4282d29-d7d9-4e62-9288-519692fb2eee=testing-taint-value-1384b7b1-40aa-4621-8d5e-bbcc8027da91:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-afcbb679-05cb-4854-bc0b-d13f075e6309=testing-taint-value-943cce9b-599d-459e-b636-a34a8b10627b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-844976e3-fe16-4793-a07c-64aa11311423=testing-taint-value-66c8d3be-5e31-4028-bc20-663721ab2d27:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-104ee189-a41d-404c-b73c-58c05affca40=testing-taint-value-c0debc87-ba20-4574-9f29-c388f7d378f5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e8eef9fb-7588-4793-b5ec-30fd60a30d54=testing-taint-value-364b4c9f-7a89-457c-8a96-3f7f538a2fe2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-756a56e1-a94c-46b7-b54c-64e25ad87955=testing-taint-value-d7718e43-4ed0-434f-bee7-18050b52daf3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-d85710ec-9d2e-403e-a052-028dff985e49=testing-taint-value-d3b1e90d-bb61-4c11-925f-d55414a684da:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d85710ec-9d2e-403e-a052-028dff985e49=testing-taint-value-d3b1e90d-bb61-4c11-925f-d55414a684da:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-756a56e1-a94c-46b7-b54c-64e25ad87955=testing-taint-value-d7718e43-4ed0-434f-bee7-18050b52daf3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e8eef9fb-7588-4793-b5ec-30fd60a30d54=testing-taint-value-364b4c9f-7a89-457c-8a96-3f7f538a2fe2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-104ee189-a41d-404c-b73c-58c05affca40=testing-taint-value-c0debc87-ba20-4574-9f29-c388f7d378f5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-844976e3-fe16-4793-a07c-64aa11311423=testing-taint-value-66c8d3be-5e31-4028-bc20-663721ab2d27:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-afcbb679-05cb-4854-bc0b-d13f075e6309=testing-taint-value-943cce9b-599d-459e-b636-a34a8b10627b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c4282d29-d7d9-4e62-9288-519692fb2eee=testing-taint-value-1384b7b1-40aa-4621-8d5e-bbcc8027da91:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3aef7162-0c9c-4c29-b624-b8daa7238d64=testing-taint-value-6eaf0d36-88bd-454a-89dc-9cf4e7735ddd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3a62df43-daaf-4b4f-823a-8f1ceb117101=testing-taint-value-4a37184a-3d64-4d21-8e06-cc03965dafc2:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-07179776-1d5d-4f2e-bbe4-a234407e6081=testing-taint-value-b3151bf0-c2d2-4f56-b19e-6b8d2a1ef2c4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f94abc56-1f3c-4f73-acc8-573400e7055c=testing-taint-value-51b7b234-3df9-40d5-a904-72f342fdc531:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e61cafc2-851e-4775-8be2-8768ecc7ff6b=testing-taint-value-d42f1f4e-087f-46c8-b1e2-2e5caef624b3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-adfc40d9-57b9-4196-9057-06504583d079=testing-taint-value-8883c59f-028d-4198-a854-49a21f03b1e9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0a8a92b2-1508-44a5-a253-095ebec612a3=testing-taint-value-3c9a04de-88da-4363-9c07-f69aa51d6c74:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e96a272d-35fd-4491-b147-159cc49ae865=testing-taint-value-66dc1d5b-39d6-489b-a80b-3e6db2f06ccb:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7c067ccb-0aba-42a1-a5bf-1a7e84a1eacd=testing-taint-value-bf0d3cb6-209e-4e41-b782-4252778e3bc3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-30a57bd8-fd9e-49f9-bdd9-2bc02dbc7510=testing-taint-value-50c35f73-c9fd-4c34-a0da-11f4948efdbb:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8667661c-8d6b-479c-86d8-3ae3958b0e13=testing-taint-value-273cf672-3df0-42bc-ba17-ddfc2e43acdf:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c8291e0e-16df-4328-8c10-9d02c9de09c4=testing-taint-value-1d2edad9-3ae7-4e7a-9ef8-9adb5be23375:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-937594b3-4ee2-48b3-b2a5-16d8867ed91b=testing-taint-value-4016fbcd-c754-4d3c-9470-82ad1487e083:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:15:11.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9633" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:87.618 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":11,"skipped":4744,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 29 01:15:11.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 29 01:15:11.591: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 29 01:15:11.599: INFO: Waiting for terminating namespaces to be deleted... 
May 29 01:15:11.601: INFO: Logging pods the apiserver thinks is on node node1 before test May 29 01:15:11.611: INFO: cmk-init-discover-node1-rvqxm from kube-system started at 2021-05-28 20:08:32 +0000 UTC (3 container statuses recorded) May 29 01:15:11.611: INFO: Container discover ready: false, restart count 0 May 29 01:15:11.611: INFO: Container init ready: false, restart count 0 May 29 01:15:11.611: INFO: Container install ready: false, restart count 0 May 29 01:15:11.611: INFO: cmk-jhzjr from kube-system started at 2021-05-28 20:09:15 +0000 UTC (2 container statuses recorded) May 29 01:15:11.611: INFO: Container nodereport ready: true, restart count 0 May 29 01:15:11.611: INFO: Container reconcile ready: true, restart count 0 May 29 01:15:11.611: INFO: cmk-webhook-6c9d5f8578-kt8bp from kube-system started at 2021-05-29 00:29:43 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container cmk-webhook ready: true, restart count 0 May 29 01:15:11.611: INFO: kube-flannel-2tjjt from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:15:11.611: INFO: kube-multus-ds-amd64-x7826 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container kube-multus ready: true, restart count 1 May 29 01:15:11.611: INFO: kube-proxy-lsngv from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:15:11.611: INFO: kubernetes-dashboard-86c6f9df5b-c5sbq from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 29 01:15:11.611: INFO: kubernetes-metrics-scraper-678c97765c-wblkm from kube-system started at 2021-05-28 19:59:33 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 29 01:15:11.611: INFO: nginx-proxy-node1 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container nginx-proxy ready: true, restart count 1 May 29 01:15:11.611: INFO: node-feature-discovery-worker-5x4qg from kube-system started at 2021-05-28 20:05:52 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:15:11.611: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zk2pt from kube-system started at 2021-05-28 20:06:47 +0000 UTC (1 container statuses recorded) May 29 01:15:11.611: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:15:11.611: INFO: collectd-qw9nd from monitoring started at 2021-05-28 20:16:29 +0000 UTC (3 container statuses recorded) May 29 01:15:11.611: INFO: Container collectd ready: true, restart count 0 May 29 01:15:11.611: INFO: Container collectd-exporter ready: true, restart count 0 May 29 01:15:11.611: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:15:11.611: INFO: node-exporter-khdpg from monitoring started at 2021-05-28 20:10:09 +0000 UTC (2 container statuses recorded) May 29 01:15:11.611: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:15:11.611: INFO: Container node-exporter ready: true, restart count 0 May 29 01:15:11.611: INFO: prometheus-k8s-0 from monitoring started at 2021-05-28 20:10:26 
+0000 UTC (5 container statuses recorded) May 29 01:15:11.611: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 29 01:15:11.611: INFO: Container grafana ready: true, restart count 0 May 29 01:15:11.611: INFO: Container prometheus ready: true, restart count 1 May 29 01:15:11.611: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 29 01:15:11.611: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 29 01:15:11.611: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-6wq28 from monitoring started at 2021-05-29 00:29:43 +0000 UTC (2 container statuses recorded) May 29 01:15:11.611: INFO: Container tas-controller ready: true, restart count 0 May 29 01:15:11.611: INFO: Container tas-extender ready: true, restart count 0 May 29 01:15:11.612: INFO: with-tolerations from sched-priority-9633 started at 2021-05-29 01:14:54 +0000 UTC (1 container statuses recorded) May 29 01:15:11.612: INFO: Container with-tolerations ready: true, restart count 0 May 29 01:15:11.612: INFO: Logging pods the apiserver thinks is on node node2 before test May 29 01:15:11.618: INFO: cmk-lbg6n from kube-system started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:15:11.618: INFO: Container nodereport ready: true, restart count 0 May 29 01:15:11.618: INFO: Container reconcile ready: true, restart count 0 May 29 01:15:11.618: INFO: kube-flannel-d9wsg from kube-system started at 2021-05-28 19:59:00 +0000 UTC (1 container statuses recorded) May 29 01:15:11.618: INFO: Container kube-flannel ready: true, restart count 2 May 29 01:15:11.618: INFO: kube-multus-ds-amd64-c9cj2 from kube-system started at 2021-05-28 19:59:08 +0000 UTC (1 container statuses recorded) May 29 01:15:11.618: INFO: Container kube-multus ready: true, restart count 1 May 29 01:15:11.618: INFO: kube-proxy-z5czn from kube-system started at 2021-05-28 19:58:24 +0000 UTC (1 container statuses recorded) May 29 01:15:11.618: INFO: Container kube-proxy ready: true, restart count 2 May 29 01:15:11.618: INFO: nginx-proxy-node2 from kube-system started at 2021-05-28 20:05:21 +0000 UTC (1 container statuses recorded) May 29 01:15:11.618: INFO: Container nginx-proxy ready: true, restart count 2 May 29 01:15:11.618: INFO: node-feature-discovery-worker-2qfpd from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:15:11.618: INFO: Container nfd-worker ready: true, restart count 0 May 29 01:15:11.618: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-mkc6p from kube-system started at 2021-05-29 00:29:50 +0000 UTC (1 container statuses recorded) May 29 01:15:11.618: INFO: Container kube-sriovdp ready: true, restart count 0 May 29 01:15:11.618: INFO: collectd-k6rzg from monitoring started at 2021-05-29 00:30:20 +0000 UTC (3 container statuses recorded) May 29 01:15:11.618: INFO: Container collectd ready: true, restart count 0 May 29 01:15:11.618: INFO: Container collectd-exporter ready: false, restart count 0 May 29 01:15:11.618: INFO: Container rbac-proxy ready: true, restart count 0 May 29 01:15:11.618: INFO: node-exporter-nsrbd from monitoring started at 2021-05-29 00:29:50 +0000 UTC (2 container statuses recorded) May 29 01:15:11.618: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 29 01:15:11.618: INFO: Container node-exporter ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649cddf5daab], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649ec8a2aee0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3027/filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649f1dd83849], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.51/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649f1e8c9770], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649f3b905770], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 486.777404ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649f421906be], Reason = [Created], Message = [Created container filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a] STEP: Considering event: Type = [Normal], Name = [filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a.1683649f4808ad31], Reason = [Started], Message = [Started container filler-pod-01a5dde3-f76a-415f-a360-e1aff8e61d0a] STEP: Considering event: Type = [Normal], Name = [without-label.1683649bed549a6a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3027/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.1683649c4060168c], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.50/24]] STEP: Considering event: Type = [Normal], Name = [without-label.1683649c41134348], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-label.1683649c5d25dfa6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 470.96932ms] STEP: Considering event: Type = [Normal], Name = [without-label.1683649c639d465d], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.1683649c6958edef], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.1683649cdd3997f0], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-poddcf77dfc-8e47-45f6-82b0-2d79b87d8d08.1683649faa9d6395], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] 
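For the overhead spec above: the events show the test adds a fake extended resource, example.com/beardsecond, plus a RuntimeClass with a pod overhead, fills most of the resource with one pod, and then expects the next pod to fail with "Insufficient example.com/beardsecond" because its request plus the RuntimeClass overhead no longer fits. The sketch below is only an illustration of that shape using the node.k8s.io/v1 Go types; the handler name, quantities, and object names are assumptions, not the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Extended resource name taken from the FailedScheduling events above.
	const fakeResource corev1.ResourceName = "example.com/beardsecond"

	// A RuntimeClass whose fixed pod overhead is charged in that resource.
	// Handler and quantity are illustrative only.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"},
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				fakeResource: resource.MustParse("250"),
			},
		},
	}

	// A pod that selects the RuntimeClass. The scheduler adds the overhead
	// on top of the container request, so this pod effectively needs
	// 500 + 250 of the fake resource to fit on a node.
	rcName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-overhead"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rcName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{fakeResource: resource.MustParse("500")},
					Limits:   corev1.ResourceList{fakeResource: resource.MustParse("500")},
				},
			}},
		},
	}

	for _, obj := range []interface{}{rc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}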
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 29 01:15:28.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3027" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.168 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":12,"skipped":4836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 29 01:15:28.748: INFO: Running AfterSuite actions on all nodes May 29 01:15:28.748: INFO: Running AfterSuite actions on node 1 May 29 01:15:28.748: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0} Ran 12 of 5484 Specs in 552.569 seconds SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped PASS Ginkgo ran 1 suite in 9m13.795223736s Test Suite Passed