I0512 21:17:14.446984 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0512 21:17:14.447148 22 e2e.go:129] Starting e2e run "d08e51b1-1cb4-4fd8-9501-fdd05a6aace7" on Ginkgo node 1
{"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1620854233 - Will randomize all specs
Will run 12 of 5484 specs

May 12 21:17:14.461: INFO: >>> kubeConfig: /root/.kube/config
May 12 21:17:14.466: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 21:17:14.498: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 21:17:14.558: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting
May 12 21:17:14.558: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 21:17:14.558: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 12 21:17:14.558: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 21:17:14.576: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 12 21:17:14.576: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 12 21:17:14.576: INFO: e2e test version: v1.19.10
May 12 21:17:14.577: INFO: kube-apiserver version: v1.19.8
May 12 21:17:14.577: INFO: >>> kubeConfig: /root/.kube/config
May 12 21:17:14.581: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 21:17:14.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
May 12 21:17:14.606: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 12 21:17:14.609: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 12 21:17:14.612: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 21:17:14.627: INFO: Waiting for terminating namespaces to be deleted...
May 12 21:17:14.633: INFO: Logging pods the apiserver thinks is on node node1 before test
May 12 21:17:14.650: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded)
May 12 21:17:14.650: INFO: Container discover ready: false, restart count 0
May 12 21:17:14.650: INFO: Container init ready: false, restart count 0
May 12 21:17:14.650: INFO: Container install ready: false, restart count 0
May 12 21:17:14.650: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded)
May 12 21:17:14.650: INFO: Container nodereport ready: true, restart count 0
May 12 21:17:14.650: INFO: Container reconcile ready: true, restart count 0
May 12 21:17:14.650: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container cmk-webhook ready: true, restart count 0
May 12 21:17:14.650: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container kube-flannel ready: true, restart count 2
May 12 21:17:14.650: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container kube-multus ready: true, restart count 1
May 12 21:17:14.650: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container kube-proxy ready: true, restart count 1
May 12 21:17:14.650: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 12 21:17:14.650: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 12 21:17:14.650: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container nginx-proxy ready: true, restart count 2
May 12 21:17:14.650: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container nfd-worker ready: true, restart count 0
May 12 21:17:14.650: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.650: INFO: Container kube-sriovdp ready: true, restart count 0
May 12 21:17:14.650: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded)
May 12 21:17:14.650: INFO: Container collectd ready: true, restart count 0
May 12 21:17:14.650: INFO: Container collectd-exporter ready: true, restart count 0
May 12 21:17:14.650: INFO: Container rbac-proxy ready: true, restart count 0
May 12 21:17:14.650: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded)
May 12 21:17:14.650: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 21:17:14.650: INFO: Container node-exporter ready: true, restart count 0
May 12 21:17:14.650: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 +0000 UTC (5 container statuses recorded)
May 12 21:17:14.650: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 12 21:17:14.650: INFO: Container grafana ready: true, restart count 0
May 12 21:17:14.650: INFO: Container prometheus ready: true, restart count 1
May 12 21:17:14.650: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 12 21:17:14.650: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 12 21:17:14.650: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded)
May 12 21:17:14.650: INFO: Container tas-controller ready: true, restart count 0
May 12 21:17:14.650: INFO: Container tas-extender ready: true, restart count 0
May 12 21:17:14.650: INFO: Logging pods the apiserver thinks is on node node2 before test
May 12 21:17:14.656: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded)
May 12 21:17:14.656: INFO: Container nodereport ready: true, restart count 0
May 12 21:17:14.656: INFO: Container reconcile ready: true, restart count 0
May 12 21:17:14.656: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.656: INFO: Container kube-flannel ready: true, restart count 1
May 12 21:17:14.656: INFO: kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.656: INFO: Container kube-multus ready: true, restart count 1
May 12 21:17:14.656: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.656: INFO: Container kube-proxy ready: true, restart count 2
May 12 21:17:14.656: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.656: INFO: Container nginx-proxy ready: true, restart count 2
May 12 21:17:14.656: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.656: INFO: Container nfd-worker ready: true, restart count 0
May 12 21:17:14.656: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded)
May 12 21:17:14.656: INFO: Container kube-sriovdp ready: true, restart count 0
May 12 21:17:14.656: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded)
May 12 21:17:14.656: INFO: Container collectd ready: true, restart count 0
May 12 21:17:14.656: INFO: Container collectd-exporter ready: true, restart count 0
May 12 21:17:14.656: INFO: Container rbac-proxy ready: true, restart count 0
May 12 21:17:14.656: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded)
May 12 21:17:14.656: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 21:17:14.656: INFO: Container node-exporter ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.167e6e56bbc4ff16], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.167e6e56bc18c009], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 21:17:15.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2297" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":1,"skipped":384,"failed":0}
------------------------------
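The spec above exercises the scheduler's node-selector filter: a pod asking for a label that no node carries must stay Pending, producing exactly the FailedScheduling events logged. A minimal sketch of such a pod built with the k8s.io/api types (the label key/value and image are illustrative placeholders, not taken from the log):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: v1.PodSpec{
			// No node carries this label, so the scheduler reports
			// "0/5 nodes are available: 5 node(s) didn't match node selector."
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []v1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	fmt.Println(pod.Name, pod.Spec.NodeSelector)
}

The test then only has to watch the pod's events for FailedScheduling, which is what the two "Considering event" lines show.
------------------------------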
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 21:17:15.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141
May 12 21:17:15.732: INFO: Waiting up to 1m0s for all nodes to be ready
May 12 21:18:15.783: INFO: Waiting for terminating namespaces to be deleted...
May 12 21:18:15.785: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 21:18:15.802: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting
May 12 21:18:15.802: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 21:18:15.802: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
[It] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
May 12 21:18:15.803: INFO: ComputeCPUMemFraction for node: node1
May 12 21:18:15.819: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600
May 12 21:18:15.819: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400
May 12 21:18:15.819: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200
May 12 21:18:15.819: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000
May 12 21:18:15.819: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840
May 12 21:18:15.819: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200
May 12 21:18:15.819: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000
May 12 21:18:15.819: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200
May 12 21:18:15.819: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 12 21:18:15.819: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200
May 12 21:18:15.819: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200
May 12 21:18:15.819: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600
May 12 21:18:15.819: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200
May 12 21:18:15.819: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400
May 12 21:18:15.819: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400
May 12 21:18:15.819: INFO: Node: node1, totalRequestedCPUResource: 1037, cpuAllocatableMil: 77000, cpuFraction: 0.013467532467532467
May 12 21:18:15.819: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884632576, memFraction: 0.009921517653261606
May 12 21:18:15.819: INFO: ComputeCPUMemFraction for node: node2
May 12 21:18:15.835: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400
May 12 21:18:15.835: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000
May 12 21:18:15.835: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840
May 12 21:18:15.835: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200
May 12 21:18:15.835: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 12 21:18:15.835: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200
May 12 21:18:15.835: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200
May 12 21:18:15.835: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600
May 12 21:18:15.835: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200
May 12 21:18:15.835: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
May 12 21:18:15.835: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405
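The fractions just logged are plain ratios of requested to allocatable resources, and they drive the next step: the test creates one "balanced" filler pod per node, sized so every node lands on the same target fraction. A sketch of that arithmetic (helper names are mine, not the framework's):

package main

import "fmt"

// fraction is requested/allocatable, e.g. milli-CPU or bytes of memory.
func fraction(requested, allocatable int64) float64 {
	return float64(requested) / float64(allocatable)
}

// fillerRequest sizes a balancing pod so a node reaches the target fraction.
func fillerRequest(requested, allocatable int64, target float64) int64 {
	return int64(target*float64(allocatable)) - requested
}

func main() {
	// node1 values from the log above, with a target fraction of 0.5.
	fmt.Println(fraction(1037, 77000))                        // 0.013467532467532467
	fmt.Println(fillerRequest(1037, 77000, 0.5))              // 37463 milli-CPU
	fmt.Println(fillerRequest(1774807040, 178884632576, 0.5)) // 87667509248 bytes
}

The two computed values match the requests of the filler pod logged below (Cpu: 37463, Mem: 87667509248), after which both nodes report cpuFraction and memFraction of exactly 0.5.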
May 12 21:18:15.851: INFO: Waiting for running...
May 12 21:18:20.916: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 12 21:18:25.967: INFO: ComputeCPUMemFraction for node: node1
May 12 21:18:25.985: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600
May 12 21:18:25.985: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400
May 12 21:18:25.985: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200
May 12 21:18:25.985: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000
May 12 21:18:25.985: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840
May 12 21:18:25.985: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200
May 12 21:18:25.985: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000
May 12 21:18:25.985: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200
May 12 21:18:25.985: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 12 21:18:25.985: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200
May 12 21:18:25.985: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200
May 12 21:18:25.985: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600
May 12 21:18:25.985: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200
May 12 21:18:25.985: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400
May 12 21:18:25.985: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400
May 12 21:18:25.985: INFO: Pod for on the node: 53cd94a4-5023-4f9a-a795-39a279bd34f3-0, Cpu: 37463, Mem: 87667509248
May 12 21:18:25.985: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
May 12 21:18:25.985: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 12 21:18:25.985: INFO: ComputeCPUMemFraction for node: node2
May 12 21:18:25.999: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400
May 12 21:18:25.999: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000
May 12 21:18:25.999: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840
May 12 21:18:25.999: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200
May 12 21:18:26.000: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 12 21:18:26.000: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200
May 12 21:18:26.000: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200
May 12 21:18:26.000: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600
May 12 21:18:26.000: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200
May 12 21:18:26.000: INFO: Pod for on the node: aab55ca7-bdec-4413-a879-45340ace13e3-0, Cpu: 38013, Mem: 88937371648
May 12 21:18:26.000: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
May 12 21:18:26.000: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5
STEP: Trying to apply 10 (tolerable) taints on the first node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-25000e65-27cc-4d87-bdac-68364f058ce6=testing-taint-value-32e94571-27a1-4e71-bc91-1e0504b43877:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4116898a-887c-4384-bb63-1d83fabf6a4f=testing-taint-value-59ac4a35-8e06-48e4-aab8-fdadbccc3edc:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6608c250-f22e-4a0f-bb0f-98bff5086348=testing-taint-value-3955d047-40aa-4366-b91f-6537fb300aef:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-71cc5cc5-41ff-4d8a-97aa-75a2baf60a1b=testing-taint-value-d637171b-fb76-43a9-8270-ab4e5873d677:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6281a9ce-2a41-4727-be5c-798152e2c495=testing-taint-value-e048a58d-a430-4502-98b1-1e0634fe1866:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-eaf606f7-241d-4297-b739-8820ea2cb1a4=testing-taint-value-998e0d7c-aae0-4089-83d0-1a4449053534:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7318470c-14c6-47a7-b2f6-931db309dd1b=testing-taint-value-f496813b-bf1f-4a56-9475-99f1ab69c107:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f320c028-a64d-48a0-8d90-1e2816618688=testing-taint-value-f6ba1b14-bcfa-42fa-b440-5fdc6ad7f03c:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e0f8dba8-3699-4cc8-9d3f-120b89ead7e3=testing-taint-value-de1be0e6-14d3-4c29-ae30-31372e40d265:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-af91c2d1-e6a7-49ae-8de7-e74f7a491742=testing-taint-value-4ebe0bdb-2522-4546-966f-0ff3cec87c94:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b3bc7149-389b-4ae1-897d-111e829ff85e=testing-taint-value-643ccc88-1a1b-4cbd-964b-32c707277237:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-a22d0c55-04e4-4fb9-ac34-7233d9a4ae9f=testing-taint-value-e3c452d0-7598-446f-866a-101e93e53bf4:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9b1e1949-07e7-44a7-9acd-5729a7224441=testing-taint-value-31a55cc3-3c74-4d23-a0d5-a2fb9b71d5ea:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-44e8ff47-0fd1-42c2-8f89-9d057f2957d5=testing-taint-value-429396dd-2050-4929-b22d-2187d2681354:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-232c6e59-e6c5-4ca0-8b8d-62fcf77fed79=testing-taint-value-cfb85c1c-40fd-4cc6-8bbf-fb578274b151:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1de99a84-c8cd-4060-83ea-9de413d158a1=testing-taint-value-a0dac16f-ef81-48c5-b525-09a279d26033:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b92e2d71-eeca-40ed-a2a8-65c580e910fe=testing-taint-value-247f4708-95ac-4e6c-a7ff-a0351735c813:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-21082ebd-5a6e-4489-8209-28ebdd929ba9=testing-taint-value-cbeda02b-05be-4a5b-80eb-154e81ea1d80:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4e20786d-2d5b-442a-94c7-edcac133138b=testing-taint-value-cfd3eda0-4b4d-408a-808c-840909c55226:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0fcfd468-94a2-4c82-8e62-a82eafe2cb24=testing-taint-value-5d6363b2-5fcb-4e28-b431-0669ab1312f6:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0fcfd468-94a2-4c82-8e62-a82eafe2cb24=testing-taint-value-5d6363b2-5fcb-4e28-b431-0669ab1312f6:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4e20786d-2d5b-442a-94c7-edcac133138b=testing-taint-value-cfd3eda0-4b4d-408a-808c-840909c55226:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-21082ebd-5a6e-4489-8209-28ebdd929ba9=testing-taint-value-cbeda02b-05be-4a5b-80eb-154e81ea1d80:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b92e2d71-eeca-40ed-a2a8-65c580e910fe=testing-taint-value-247f4708-95ac-4e6c-a7ff-a0351735c813:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1de99a84-c8cd-4060-83ea-9de413d158a1=testing-taint-value-a0dac16f-ef81-48c5-b525-09a279d26033:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-232c6e59-e6c5-4ca0-8b8d-62fcf77fed79=testing-taint-value-cfb85c1c-40fd-4cc6-8bbf-fb578274b151:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-44e8ff47-0fd1-42c2-8f89-9d057f2957d5=testing-taint-value-429396dd-2050-4929-b22d-2187d2681354:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9b1e1949-07e7-44a7-9acd-5729a7224441=testing-taint-value-31a55cc3-3c74-4d23-a0d5-a2fb9b71d5ea:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-a22d0c55-04e4-4fb9-ac34-7233d9a4ae9f=testing-taint-value-e3c452d0-7598-446f-866a-101e93e53bf4:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b3bc7149-389b-4ae1-897d-111e829ff85e=testing-taint-value-643ccc88-1a1b-4cbd-964b-32c707277237:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-af91c2d1-e6a7-49ae-8de7-e74f7a491742=testing-taint-value-4ebe0bdb-2522-4546-966f-0ff3cec87c94:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e0f8dba8-3699-4cc8-9d3f-120b89ead7e3=testing-taint-value-de1be0e6-14d3-4c29-ae30-31372e40d265:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f320c028-a64d-48a0-8d90-1e2816618688=testing-taint-value-f6ba1b14-bcfa-42fa-b440-5fdc6ad7f03c:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7318470c-14c6-47a7-b2f6-931db309dd1b=testing-taint-value-f496813b-bf1f-4a56-9475-99f1ab69c107:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-eaf606f7-241d-4297-b739-8820ea2cb1a4=testing-taint-value-998e0d7c-aae0-4089-83d0-1a4449053534:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6281a9ce-2a41-4727-be5c-798152e2c495=testing-taint-value-e048a58d-a430-4502-98b1-1e0634fe1866:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-71cc5cc5-41ff-4d8a-97aa-75a2baf60a1b=testing-taint-value-d637171b-fb76-43a9-8270-ab4e5873d677:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6608c250-f22e-4a0f-bb0f-98bff5086348=testing-taint-value-3955d047-40aa-4366-b91f-6537fb300aef:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4116898a-887c-4384-bb63-1d83fabf6a4f=testing-taint-value-59ac4a35-8e06-48e4-aab8-fdadbccc3edc:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-25000e65-27cc-4d87-bdac-68364f058ce6=testing-taint-value-32e94571-27a1-4e71-bc91-1e0504b43877:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 21:18:45.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-6925" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138
• [SLOW TEST:89.676 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":2,"skipped":923,"failed":0}
------------------------------
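What just ran: node1 received ten randomly generated PreferNoSchedule taints that the test pod tolerates, every other node received ten it does not, and the pod was expected to land on node1 because PreferNoSchedule is a soft scoring signal rather than a hard filter. A sketch of one taint/toleration pair using the k8s.io/api types (the key and value stand in for the generated UUID pairs):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One PreferNoSchedule taint like those applied above.
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example",
		Value:  "testing-taint-value-example",
		Effect: v1.TaintEffectPreferNoSchedule,
	}
	// The matching toleration the test pod would carry. Nodes whose taints
	// are all tolerated score higher; untolerated PreferNoSchedule taints
	// lower a node's score instead of filtering it out.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}
	fmt.Println(toleration.ToleratesTaint(&taint)) // true
}
------------------------------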
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 21:18:45.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
May 12 21:18:45.422: INFO: Waiting up to 1m0s for all nodes to be ready
May 12 21:19:45.481: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node2.
STEP: Apply 10 fake resource to node node1.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 21:20:27.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-714" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:102.378 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":3,"skipped":1147,"failed":0}
------------------------------
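In the spec above, a fake extended resource fills both nodes to 9/10, so the "medium" pod, which spreads over the per-test topology key, can only fit if the scheduler preempts a lower-priority "low" pod. A sketch of the kind of constraint involved, assuming a hard (DoNotSchedule) constraint and an illustrative label selector (the test's actual selector is not shown in the log):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Spread matching pods over the dedicated per-test topology key with
	// MaxSkew=1; DoNotSchedule makes the constraint hard, so the scheduler
	// must preempt rather than violate it.
	constraint := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-preemption",
		WhenUnsatisfiable: v1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"name": "medium"},
		},
	}
	fmt.Printf("%+v\n", constraint)
}
------------------------------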
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 21:20:27.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141
May 12 21:20:27.795: INFO: Waiting up to 1m0s for all nodes to be ready
May 12 21:21:27.850: INFO: Waiting for terminating namespaces to be deleted...
May 12 21:21:27.852: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 21:21:27.869: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting
May 12 21:21:27.869: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 21:21:27.869: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
[It] Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
STEP: Trying to launch a pod with a label to get a node which can launch it.
STEP: Verifying the node has a label kubernetes.io/hostname
May 12 21:21:31.902: INFO: ComputeCPUMemFraction for node: node1
May 12 21:21:31.916: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600
May 12 21:21:31.916: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400
May 12 21:21:31.916: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200
May 12 21:21:31.916: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000
May 12 21:21:31.916: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840
May 12 21:21:31.916: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200
May 12 21:21:31.916: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000
May 12 21:21:31.916: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200
May 12 21:21:31.916: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 12 21:21:31.916: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200
May 12 21:21:31.916: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200
May 12 21:21:31.916: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600
May 12 21:21:31.916: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200
May 12 21:21:31.916: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400
May 12 21:21:31.916: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400
May 12 21:21:31.917: INFO: Node: node1, totalRequestedCPUResource: 1037, cpuAllocatableMil: 77000, cpuFraction: 0.013467532467532467
May 12 21:21:31.917: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884632576, memFraction: 0.009921517653261606
May 12 21:21:31.917: INFO: ComputeCPUMemFraction for node: node2
May 12 21:21:31.933: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400
May 12 21:21:31.933: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000
May 12 21:21:31.933: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840
May 12 21:21:31.933: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200
May 12 21:21:31.933: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 12 21:21:31.933: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200
May 12 21:21:31.933: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200
May 12 21:21:31.933: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600
May 12 21:21:31.933: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200
May 12 21:21:31.933: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
May 12 21:21:31.933: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325
May 12 21:21:31.933: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405
May 12 21:21:31.945: INFO: Waiting for running...
May 12 21:21:37.014: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 12 21:21:42.067: INFO: ComputeCPUMemFraction for node: node1
May 12 21:21:42.084: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600
May 12 21:21:42.084: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400
May 12 21:21:42.084: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200
May 12 21:21:42.084: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000
May 12 21:21:42.084: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840
May 12 21:21:42.084: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200
May 12 21:21:42.084: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000
May 12 21:21:42.084: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200
May 12 21:21:42.084: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
May 12 21:21:42.084: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200
May 12 21:21:42.084: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200
May 12 21:21:42.084: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600
May 12 21:21:42.084: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200
May 12 21:21:42.084: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400
May 12 21:21:42.084: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400
May 12 21:21:42.084: INFO: Pod for on the node: bb57bd69-ec99-4420-9cd4-d7a30c9245c9-0, Cpu: 45162, Mem: 105555972505
May 12 21:21:42.084: INFO: Node: node1, totalRequestedCPUResource: 46199, cpuAllocatableMil: 77000, cpuFraction: 0.599987012987013
May 12 21:21:42.084: INFO: Node: node1, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459
STEP: Compute Cpu, Mem Fraction after create balanced pods.
May 12 21:21:42.084: INFO: ComputeCPUMemFraction for node: node2
May 12 21:21:42.101: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400
May 12 21:21:42.101: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000
May 12 21:21:42.101: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840
May 12 21:21:42.101: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200
May 12 21:21:42.101: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
May 12 21:21:42.101: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200
May 12 21:21:42.101: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200
May 12 21:21:42.101: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600
May 12 21:21:42.101: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200
May 12 21:21:42.101: INFO: Pod for on the node: 6d6f8bb3-c964-4500-a92a-67d9c181baca-0, Cpu: 45713, Mem: 106825834905
May 12 21:21:42.101: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
May 12 21:21:42.101: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
May 12 21:21:42.101: INFO: Node: node2, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459
STEP: Trying to launch the pod with podAntiAffinity.
STEP: Wait the pod becomes running
STEP: Verify the pod was scheduled to the expected node.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 21:21:54.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-1466" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138
• [SLOW TEST:86.384 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be scheduled to node that don't match the PodAntiAffinity terms
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":4,"skipped":1368,"failed":0}
------------------------------
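Here the test first placed pod-with-label-security-s1 on node2, balanced both nodes to a 0.6 fraction, then launched pod-with-pod-antiaffinity, which had to land on node1, the node whose pods do not match its anti-affinity term. A sketch of such a term, assuming the labeled pod carries security=S1 (inferred from the pod name, not stated in the log):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Avoid any node (topologyKey kubernetes.io/hostname) already running a
	// pod whose security label is S1, i.e. node2 in the run above.
	antiAffinity := &v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", antiAffinity.PodAntiAffinity)
}
------------------------------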
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 21:21:54.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 12 21:21:54.187: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 21:21:54.195: INFO: Waiting for terminating namespaces to be deleted...
May 12 21:21:54.197: INFO: Logging pods the apiserver thinks is on node node1 before test
May 12 21:21:54.222: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded)
May 12 21:21:54.222: INFO: Container discover ready: false, restart count 0
May 12 21:21:54.222: INFO: Container init ready: false, restart count 0
May 12 21:21:54.222: INFO: Container install ready: false, restart count 0
May 12 21:21:54.222: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded)
May 12 21:21:54.222: INFO: Container nodereport ready: true, restart count 0
May 12 21:21:54.222: INFO: Container reconcile ready: true, restart count 0
May 12 21:21:54.222: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container cmk-webhook ready: true, restart count 0
May 12 21:21:54.222: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container kube-flannel ready: true, restart count 2
May 12 21:21:54.222: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container kube-multus ready: true, restart count 1
May 12 21:21:54.222: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container kube-proxy ready: true, restart count 1
May 12 21:21:54.222: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 12 21:21:54.222: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 12 21:21:54.222: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container nginx-proxy ready: true, restart count 2
May 12 21:21:54.222: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container nfd-worker ready: true, restart count 0
May 12 21:21:54.222: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container kube-sriovdp ready: true, restart count 0
May 12 21:21:54.222: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded)
May 12 21:21:54.222: INFO: Container collectd ready: true, restart count 0
May 12 21:21:54.222: INFO: Container collectd-exporter ready: true, restart count 0
May 12 21:21:54.222: INFO: Container rbac-proxy ready: true, restart count 0
May 12 21:21:54.222: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded)
May 12 21:21:54.222: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 21:21:54.222: INFO: Container node-exporter ready: true, restart count 0
May 12 21:21:54.222: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 +0000 UTC (5 container statuses recorded)
May 12 21:21:54.222: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 12 21:21:54.222: INFO: Container grafana ready: true, restart count 0
May 12 21:21:54.222: INFO: Container prometheus ready: true, restart count 1
May 12 21:21:54.222: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 12 21:21:54.222: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 12 21:21:54.222: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded)
May 12 21:21:54.222: INFO: Container tas-controller ready: true, restart count 0
May 12 21:21:54.222: INFO: Container tas-extender ready: true, restart count 0
May 12 21:21:54.222: INFO: pod-with-pod-antiaffinity from sched-priority-1466 started at 2021-05-12 21:21:42 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.222: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
May 12 21:21:54.222: INFO: Logging pods the apiserver thinks is on node node2 before test
May 12 21:21:54.229: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded)
May 12 21:21:54.229: INFO: Container nodereport ready: true, restart count 0
May 12 21:21:54.229: INFO: Container reconcile ready: true, restart count 0
May 12 21:21:54.229: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container kube-flannel ready: true, restart count 1
May 12 21:21:54.229: INFO: kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container kube-multus ready: true, restart count 1
May 12 21:21:54.229: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container kube-proxy ready: true, restart count 2
May 12 21:21:54.229: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container nginx-proxy ready: true, restart count 2
May 12 21:21:54.229: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container nfd-worker ready: true, restart count 0
May 12 21:21:54.229: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container kube-sriovdp ready: true, restart count 0
May 12 21:21:54.229: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded)
May 12 21:21:54.229: INFO: Container collectd ready: true, restart count 0
May 12 21:21:54.229: INFO: Container collectd-exporter ready: true, restart count 0
May 12 21:21:54.229: INFO: Container rbac-proxy ready: true, restart count 0
May 12 21:21:54.229: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded)
May 12 21:21:54.229: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 21:21:54.229: INFO: Container node-exporter ready: true, restart count 0
May 12 21:21:54.229: INFO: pod-with-label-security-s1 from sched-priority-1466 started at 2021-05-12 21:21:27 +0000 UTC (1 container statuses recorded)
May 12 21:21:54.229: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 12 21:22:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5663" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:12.184 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":5,"skipped":1932,"failed":0}
------------------------------
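MaxSkew bounds how unevenly matching pods may spread across topology domains: the count in any one domain may exceed the global minimum by at most MaxSkew. With 4 pods, 2 nodes, and MaxSkew=1, only the 2/2 split satisfies the constraint, which is what the spec asserts. A small sketch of the skew computation (my helper, not framework code):

package main

import "fmt"

// skew returns max(count) - min(count) across topology domains; a placement
// satisfies the constraint when skew <= MaxSkew.
func skew(countsPerDomain []int) int {
	min, max := countsPerDomain[0], countsPerDomain[0]
	for _, c := range countsPerDomain {
		if c < min {
			min = c
		}
		if c > max {
			max = c
		}
	}
	return max - min
}

func main() {
	fmt.Println(skew([]int{2, 2})) // 0: 4 pods split evenly, allowed with MaxSkew=1
	fmt.Println(skew([]int{3, 1})) // 2: violates MaxSkew=1, so the scheduler filters it
}
------------------------------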
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 12 21:22:06.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 12 21:22:06.378: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 12 21:22:06.386: INFO: Waiting for terminating namespaces to be deleted...
May 12 21:22:06.388: INFO: Logging pods the apiserver thinks is on node node1 before test
May 12 21:22:06.399: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded)
May 12 21:22:06.399: INFO: Container discover ready: false, restart count 0
May 12 21:22:06.399: INFO: Container init ready: false, restart count 0
May 12 21:22:06.399: INFO: Container install ready: false, restart count 0
May 12 21:22:06.399: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded)
May 12 21:22:06.399: INFO: Container nodereport ready: true, restart count 0
May 12 21:22:06.399: INFO: Container reconcile ready: true, restart count 0
May 12 21:22:06.399: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container cmk-webhook ready: true, restart count 0
May 12 21:22:06.399: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container kube-flannel ready: true, restart count 2
May 12 21:22:06.399: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container kube-multus ready: true, restart count 1
May 12 21:22:06.399: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container kube-proxy ready: true, restart count 1
May 12 21:22:06.399: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container kubernetes-dashboard ready: true, restart count 1
May 12 21:22:06.399: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 12 21:22:06.399: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container nginx-proxy ready: true, restart count 2
May 12 21:22:06.399: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container nfd-worker ready: true, restart count 0
May 12 21:22:06.399: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container kube-sriovdp ready: true, restart count 0
May 12 21:22:06.399: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded)
May 12 21:22:06.399: INFO: Container collectd ready: true, restart count 0
May 12 21:22:06.399: INFO: Container collectd-exporter ready: true, restart count 0
May 12 21:22:06.399: INFO: Container rbac-proxy ready: true, restart count 0
May 12 21:22:06.399: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded)
May 12 21:22:06.399: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 21:22:06.399: INFO: Container node-exporter ready: true, restart count 0
May 12 21:22:06.399: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 +0000 UTC (5 container statuses recorded)
May 12 21:22:06.399: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 12 21:22:06.399: INFO: Container grafana ready: true, restart count 0
May 12 21:22:06.399: INFO: Container prometheus ready: true, restart count 1
May 12 21:22:06.399: INFO: Container prometheus-config-reloader ready: true, restart count 0
May 12 21:22:06.399: INFO: Container rules-configmap-reloader ready: true, restart count 0
May 12 21:22:06.399: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded)
May 12 21:22:06.399: INFO: Container tas-controller ready: true, restart count 0
May 12 21:22:06.399: INFO: Container tas-extender ready: true, restart count 0
May 12 21:22:06.399: INFO: rs-e2e-pts-filter-s88pn from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container e2e-pts-filter ready: true, restart count 0
May 12 21:22:06.399: INFO: rs-e2e-pts-filter-t77cj from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.399: INFO: Container e2e-pts-filter ready: true, restart count 0
May 12 21:22:06.399: INFO: Logging pods the apiserver thinks is on node node2 before test
May 12 21:22:06.418: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded)
May 12 21:22:06.418: INFO: Container nodereport ready: true, restart count 0
May 12 21:22:06.418: INFO: Container reconcile ready: true, restart count 0
May 12 21:22:06.418: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container kube-flannel ready: true, restart count 1
May 12 21:22:06.418: INFO: kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container kube-multus ready: true, restart count 1
May 12 21:22:06.418: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container kube-proxy ready: true, restart count 2
May 12 21:22:06.418: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container nginx-proxy ready: true, restart count 2
May 12 21:22:06.418: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container nfd-worker ready: true, restart count 0
May 12 21:22:06.418: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container kube-sriovdp ready: true, restart count 0
May 12 21:22:06.418: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded)
May 12 21:22:06.418: INFO: Container collectd ready: true, restart count 0
May 12 21:22:06.418: INFO: Container collectd-exporter ready: true, restart count 0
May 12 21:22:06.418: INFO: Container rbac-proxy ready: true, restart count 0
May 12 21:22:06.418: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded)
May 12 21:22:06.418: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 12 21:22:06.418: INFO: Container node-exporter ready: true, restart count 0
May 12 21:22:06.418: INFO: rs-e2e-pts-filter-2hk7g from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container e2e-pts-filter ready: true, restart count 0
May 12 21:22:06.418: INFO: rs-e2e-pts-filter-wjgh7 from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container e2e-pts-filter ready: true, restart count 0
May 12 21:22:06.418: INFO: pod-with-label-security-s1 from sched-priority-1466 started at 2021-05-12 21:21:27 +0000 UTC (1 container statuses recorded)
May 12 21:22:06.418: INFO: Container pod-with-label-security-s1 ready: false, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9b9a25f11b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.]
STEP: Considering event: Type = [Warning], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9b9a748184], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9d5ebff694], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5008/filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9db3bace7c], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.242/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9db47e7b6e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9dd24abc08], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 499.917268ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9dd974db37], Reason = [Created], Message = [Created container filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1] STEP: Considering event: Type = [Normal], Name = [filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1.167e6e9ddfc786c7], Reason = [Started], Message = [Started container filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9aa9a3fff1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5008/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9afb3a2d39], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.241/24]] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9afbf58f74], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9b19870db0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 496.064674ms] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9b20172383], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9b26329121], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Warning], Name = [without-label.167e6e9b99d7456a], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-b4d2d" : object "sched-pred-5008"/"default-token-b4d2d" not registered] STEP: Considering event: Type = [Normal], Name = [without-label.167e6e9b99ebe4d4], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod6d8c322b-d44b-4169-8813-46ab89905967.167e6e9e66c7cae8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [additional-pod6d8c322b-d44b-4169-8813-46ab89905967.167e6e9e671d12b0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] 
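------------------------------
Note on the events above: the repeated "5 Insufficient example.com/beardsecond" FailedScheduling messages are the expected outcome, because the scheduler counts RuntimeClass pod overhead against node allocatable alongside the container requests, so the filler pod plus overhead exhausts the test's fake extended resource. A minimal Go sketch of a RuntimeClass carrying an overhead follows; the name, handler, and quantities are illustrative assumptions, not the values this suite registers.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-example"}, // illustrative
		Handler:    "runc",                                      // illustrative
		// Overhead.PodFixed is added to the pod's effective resource
		// requests at admission and scheduling time, so a pod can be
		// rejected even when its containers alone would fit on a node.
		Overhead: &nodev1beta1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}
	fmt.Println(rc.Name)
}

A pod opts in by setting spec.runtimeClassName to the RuntimeClass name; the scheduler then charges requests plus overhead against node allocatable.
------------------------------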
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:22:23.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5008" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":6,"skipped":2611,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 21:22:23.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 21:22:23.550: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 21:22:23.558: INFO: Waiting for terminating namespaces to be deleted... 
May 12 21:22:23.560: INFO: Logging pods the apiserver thinks is on node node1 before test May 12 21:22:23.578: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded) May 12 21:22:23.578: INFO: Container discover ready: false, restart count 0 May 12 21:22:23.578: INFO: Container init ready: false, restart count 0 May 12 21:22:23.578: INFO: Container install ready: false, restart count 0 May 12 21:22:23.578: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded) May 12 21:22:23.578: INFO: Container nodereport ready: true, restart count 0 May 12 21:22:23.578: INFO: Container reconcile ready: true, restart count 0 May 12 21:22:23.578: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container cmk-webhook ready: true, restart count 0 May 12 21:22:23.578: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container kube-flannel ready: true, restart count 2 May 12 21:22:23.578: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container kube-multus ready: true, restart count 1 May 12 21:22:23.578: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container kube-proxy ready: true, restart count 1 May 12 21:22:23.578: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 21:22:23.578: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 21:22:23.578: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:22:23.578: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:22:23.578: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:22:23.578: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded) May 12 21:22:23.578: INFO: Container collectd ready: true, restart count 0 May 12 21:22:23.578: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:22:23.578: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:22:23.578: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded) May 12 21:22:23.578: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:22:23.578: INFO: Container node-exporter ready: true, restart count 0 May 12 21:22:23.578: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 
+0000 UTC (5 container statuses recorded) May 12 21:22:23.578: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 21:22:23.578: INFO: Container grafana ready: true, restart count 0 May 12 21:22:23.578: INFO: Container prometheus ready: true, restart count 1 May 12 21:22:23.578: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 21:22:23.578: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 21:22:23.578: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded) May 12 21:22:23.578: INFO: Container tas-controller ready: true, restart count 0 May 12 21:22:23.578: INFO: Container tas-extender ready: true, restart count 0 May 12 21:22:23.578: INFO: rs-e2e-pts-filter-s88pn from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container e2e-pts-filter ready: false, restart count 0 May 12 21:22:23.578: INFO: rs-e2e-pts-filter-t77cj from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded) May 12 21:22:23.578: INFO: Container e2e-pts-filter ready: false, restart count 0 May 12 21:22:23.578: INFO: Logging pods the apiserver thinks is on node node2 before test May 12 21:22:23.587: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:22:23.587: INFO: Container nodereport ready: true, restart count 0 May 12 21:22:23.587: INFO: Container reconcile ready: true, restart count 0 May 12 21:22:23.587: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:22:23.587: INFO: Container kube-flannel ready: true, restart count 1 May 12 21:22:23.587: INFO: kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:22:23.587: INFO: Container kube-multus ready: true, restart count 1 May 12 21:22:23.587: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:22:23.587: INFO: Container kube-proxy ready: true, restart count 2 May 12 21:22:23.587: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:22:23.587: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:22:23.588: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:22:23.588: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:22:23.588: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:22:23.588: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:22:23.588: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded) May 12 21:22:23.588: INFO: Container collectd ready: true, restart count 0 May 12 21:22:23.588: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:22:23.588: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:22:23.588: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:22:23.588: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 
21:22:23.588: INFO: Container node-exporter ready: true, restart count 0 May 12 21:22:23.588: INFO: filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1 from sched-pred-5008 started at 2021-05-12 21:22:18 +0000 UTC (1 container statuses recorded) May 12 21:22:23.588: INFO: Container filler-pod-af196c6d-f8d1-42bb-b572-d8a29f566fe1 ready: true, restart count 0 May 12 21:22:23.588: INFO: rs-e2e-pts-filter-2hk7g from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded) May 12 21:22:23.588: INFO: Container e2e-pts-filter ready: false, restart count 0 May 12 21:22:23.588: INFO: rs-e2e-pts-filter-wjgh7 from sched-pred-5663 started at 2021-05-12 21:22:02 +0000 UTC (1 container statuses recorded) May 12 21:22:23.588: INFO: Container e2e-pts-filter ready: false, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 12 21:22:29.673: INFO: Pod cmk-5b8cg requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod cmk-v4qwz requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod cmk-webhook-6c9d5f8578-mwvqc requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod kube-flannel-r7w6z requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod kube-flannel-rqtcs requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod kube-multus-ds-amd64-fhzwc requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod kube-multus-ds-amd64-k28rf requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod kube-proxy-grtqc requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod kube-proxy-r9vsx requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod kubernetes-dashboard-86c6f9df5b-vkvbq requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod kubernetes-metrics-scraper-678c97765c-s4sgj requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod node-feature-discovery-worker-qtn84 requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod node-feature-discovery-worker-x5q8m requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-46jff requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod collectd-5mpmz requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod collectd-w6fng requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod node-exporter-ddxbd requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod node-exporter-nnf86 requesting local ephemeral resource =0 on Node node2 May 12 21:22:29.673: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 requesting local 
ephemeral resource =0 on Node node1 May 12 21:22:29.673: INFO: Using pod capacity: 40542413347 May 12 21:22:29.673: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 May 12 21:22:29.673: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 12 21:22:29.870: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.167e6ea012dcf1c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167e6ea122e7892b], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.244/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167e6ea123bc3e14], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167e6ea156841154], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 851.947997ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167e6ea18b51115f], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167e6ea1fb830d8b], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167e6ea0135de7c9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167e6ea126b930d0], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.245/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167e6ea12771a88d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167e6ea1838397a4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.544670074s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167e6ea1958cc1d5], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167e6ea213192b19], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167e6ea01895fa8a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167e6ea2123b2f1a], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.252/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167e6ea213b36765], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167e6ea2b45537c8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.694951516s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167e6ea2baae52c4], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167e6ea2c01277cf], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167e6ea019251843], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-11 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167e6ea247256ed8], 
Reason = [AddedInterface], Message = [Add eth0 [10.244.3.118/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167e6ea24b4b49fe], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167e6ea2f73158d7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.883973578s] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167e6ea2fecbdc90], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167e6ea305950918], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167e6ea019c94b5d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167e6ea10e78e32b], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.111/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167e6ea11207d9a1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167e6ea13427729f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 572.484055ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167e6ea152c4d91b], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167e6ea198a5933f], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167e6ea01a5d5476], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167e6ea232b41794], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.116/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167e6ea246d03cdc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167e6ea2818cac94], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 985.420999ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167e6ea2894baa3a], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167e6ea28f2d2350], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167e6ea01ae31539], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167e6ea2329f17f6], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.117/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167e6ea246d0709a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167e6ea2d9653ae6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.459217072s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167e6ea2ec87c1ff], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167e6ea2f3062e0b], Reason = [Started], Message = [Started container overcommit-14] STEP: 
Considering event: Type = [Normal], Name = [overcommit-15.167e6ea01b765cac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167e6ea24cc721f1], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.119/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167e6ea24d624e64], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167e6ea333e38c0b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.867218558s] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167e6ea33bacdfb8], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167e6ea341fd49be], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167e6ea01c0ab45d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167e6ea24732ca22], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.115/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167e6ea24b5e42a4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167e6ea3170227f3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.41650776s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167e6ea3213da9f6], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167e6ea328ccc5e2], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167e6ea01c9c222d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167e6ea19d5e5768], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.113/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167e6ea1def94ff9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167e6ea221b0959e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.119297742s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167e6ea23d06440b], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167e6ea24f8a0783], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167e6ea01d2f7841], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167e6ea22f7a5177], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.114/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167e6ea246ca0130], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167e6ea26688f382], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 532.599536ms] STEP: Considering event: Type = [Normal], Name 
= [overcommit-18.167e6ea26de3b697], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167e6ea274068c04], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167e6ea01dc6832e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167e6ea156243cb8], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.112/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167e6ea17dea515a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167e6ea19c7f55b6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 513.073477ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167e6ea1a9b5145f], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167e6ea20b229ec9], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167e6ea013e8551d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167e6ea1faef2c49], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.250/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167e6ea20f2101c6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167e6ea295d3f888], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.259869208s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167e6ea29cc8e4b4], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167e6ea2a2a07372], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167e6ea0147d4cd2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-3 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167e6ea0c3cc439d], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.110/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167e6ea0c4c7fb3d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167e6ea0e26b8386], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 497.247089ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167e6ea11923a952], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167e6ea150207ea1], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167e6ea0150e0b0f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167e6ea0ec108e6d], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.243/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167e6ea10cf5f644], Reason = [Pulling], Message = [Pulling image 
"k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167e6ea12905ff95], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 470.803745ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167e6ea14781e91e], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167e6ea19357a54b], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167e6ea015a52473], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167e6ea1ec6ffe2b], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.249/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167e6ea1fa83b261], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167e6ea25ca59288], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.646381691s] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167e6ea263de6649], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167e6ea26a068a30], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167e6ea0164af3a4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167e6ea1cd6e9bd9], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.248/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167e6ea1e6cf0e9a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167e6ea205524b0c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 511.909588ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167e6ea21557cd02], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167e6ea21ae3150f], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167e6ea016d55c7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167e6ea1e8558e76], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.246/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167e6ea1eb2181d5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167e6ea225105be8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 971.947516ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167e6ea22b0a4361], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167e6ea23093e81a], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167e6ea017821c79], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = 
[overcommit-8.167e6ea1eb5210af], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.247/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167e6ea1fa78e38d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167e6ea24038c616], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.170195908s] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167e6ea246c8a50f], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167e6ea24d148a28], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167e6ea0180a4413], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4778/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167e6ea1fae337af], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.251/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167e6ea20f1f57ec], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167e6ea278f0f27b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.775336509s] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167e6ea2800a83b3], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167e6ea2859fc295], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.167e6ea3a01e5111], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [additional-pod.167e6ea3a074b5d9], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:22:45.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4778" for this suite. 
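------------------------------
Note on the saturation arithmetic above: each worker reports 405424133473 bytes of allocatable ephemeral storage, and 20 pods across the 2 schedulable workers means 10 per node, hence the logged per-pod capacity of 40542413347 bytes. A minimal Go sketch of one such overcommit pod spec with an ephemeral-storage request/limit; the pod name is an illustrative stand-in.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// 405424133473 bytes allocatable / 10 pods per node, as computed in the log.
	perPod := resource.NewQuantity(40542413347, resource.BinarySI)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-example"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceEphemeralStorage: *perPod,
					},
					Limits: corev1.ResourceList{
						corev1.ResourceEphemeralStorage: *perPod,
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}

Once 20 such pods are running, any additional pod requesting ephemeral storage fails on both workers, which is exactly what the two final FailedScheduling events show: "2 Insufficient ephemeral-storage" plus the 3 master nodes excluded by the node-role taint.
------------------------------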
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:22.434 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":7,"skipped":2623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 21:22:45.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 21:22:45.989: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 21:22:45.997: INFO: Waiting for terminating namespaces to be deleted... 
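------------------------------
Note on the taints-tolerations spec now starting: as the STEP lines further below show, the test applies a random NoSchedule taint plus a random label to one node, then relaunches the pod with a matching toleration and node selector so it can only land on the tainted node. A minimal Go sketch of the tolerating pod spec; the fixed key names below stand in for the random e2e keys in the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations-example"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
			// Matches a taint of the form <key>=testing-taint-value:NoSchedule.
			Tolerations: []corev1.Toleration{{
				Key:      "kubernetes.io/e2e-taint-key-example", // stand-in key
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			// Pins the pod to the node carrying the random test label.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-label-key-example": "testing-label-value", // stand-in key
			},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------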
May 12 21:22:45.999: INFO: Logging pods the apiserver thinks is on node node1 before test May 12 21:22:46.025: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded) May 12 21:22:46.025: INFO: Container discover ready: false, restart count 0 May 12 21:22:46.025: INFO: Container init ready: false, restart count 0 May 12 21:22:46.025: INFO: Container install ready: false, restart count 0 May 12 21:22:46.025: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded) May 12 21:22:46.025: INFO: Container nodereport ready: true, restart count 0 May 12 21:22:46.025: INFO: Container reconcile ready: true, restart count 0 May 12 21:22:46.025: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container cmk-webhook ready: true, restart count 0 May 12 21:22:46.025: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container kube-flannel ready: true, restart count 2 May 12 21:22:46.025: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container kube-multus ready: true, restart count 1 May 12 21:22:46.025: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container kube-proxy ready: true, restart count 1 May 12 21:22:46.025: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 21:22:46.025: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 21:22:46.025: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:22:46.025: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:22:46.025: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:22:46.025: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded) May 12 21:22:46.025: INFO: Container collectd ready: true, restart count 0 May 12 21:22:46.025: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:22:46.025: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:22:46.025: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded) May 12 21:22:46.025: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:22:46.025: INFO: Container node-exporter ready: true, restart count 0 May 12 21:22:46.025: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 
+0000 UTC (5 container statuses recorded) May 12 21:22:46.025: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 21:22:46.025: INFO: Container grafana ready: true, restart count 0 May 12 21:22:46.025: INFO: Container prometheus ready: true, restart count 1 May 12 21:22:46.025: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 21:22:46.025: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 21:22:46.025: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded) May 12 21:22:46.025: INFO: Container tas-controller ready: true, restart count 0 May 12 21:22:46.025: INFO: Container tas-extender ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-11 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-11 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-12 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-12 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-13 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-13 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-14 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-14 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-15 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-15 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-16 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-16 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-17 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-17 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-18 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-18 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-19 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-19 ready: true, restart count 0 May 12 21:22:46.025: INFO: overcommit-3 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.025: INFO: Container overcommit-3 ready: true, restart count 0 May 12 21:22:46.025: INFO: Logging pods the apiserver thinks is on node node2 before test May 12 21:22:46.034: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:22:46.034: INFO: Container nodereport ready: true, restart count 0 May 12 21:22:46.034: INFO: Container reconcile ready: true, restart count 0 May 12 21:22:46.034: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container kube-flannel ready: true, restart count 1 May 12 21:22:46.034: INFO: 
kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container kube-multus ready: true, restart count 1 May 12 21:22:46.034: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container kube-proxy ready: true, restart count 2 May 12 21:22:46.034: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:22:46.034: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:22:46.034: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:22:46.034: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded) May 12 21:22:46.034: INFO: Container collectd ready: true, restart count 0 May 12 21:22:46.034: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:22:46.034: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:22:46.034: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:22:46.034: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:22:46.034: INFO: Container node-exporter ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-0 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-0 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-1 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-1 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-10 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-10 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-2 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-2 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-4 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-4 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-5 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-5 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-6 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-6 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-7 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-7 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-8 from sched-pred-4778 started at 2021-05-12 
21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-8 ready: true, restart count 0 May 12 21:22:46.034: INFO: overcommit-9 from sched-pred-4778 started at 2021-05-12 21:22:29 +0000 UTC (1 container statuses recorded) May 12 21:22:46.034: INFO: Container overcommit-9 ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-06ebddeb-5996-4b5e-8419-57d46a632cfb=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-744b333a-5fcd-4515-bcf7-232209963e0e testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-744b333a-5fcd-4515-bcf7-232209963e0e off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-744b333a-5fcd-4515-bcf7-232209963e0e STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-06ebddeb-5996-4b5e-8419-57d46a632cfb=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:22:58.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6733" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":8,"skipped":2785,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 
21:22:58.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 12 21:22:58.173: INFO: Waiting up to 1m0s for all nodes to be ready May 12 21:23:58.223: INFO: Waiting for terminating namespaces to be deleted... May 12 21:23:58.224: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 12 21:23:58.249: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting May 12 21:23:58.249: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 12 21:23:58.249: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 May 12 21:24:06.326: INFO: ComputeCPUMemFraction for node: node2 May 12 21:24:06.339: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400 May 12 21:24:06.339: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000 May 12 21:24:06.339: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840 May 12 21:24:06.339: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200 May 12 21:24:06.339: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 12 21:24:06.339: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200 May 12 21:24:06.339: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200 May 12 21:24:06.339: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600 May 12 21:24:06.339: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200 May 12 21:24:06.340: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 12 21:24:06.340: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405 May 12 21:24:06.340: INFO: ComputeCPUMemFraction for node: node1 May 12 21:24:06.355: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600 May 12 21:24:06.355: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400 May 12 21:24:06.355: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200 May 12 21:24:06.355: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000 May 12 21:24:06.355: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840 May 12 
21:24:06.355: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200 May 12 21:24:06.355: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000 May 12 21:24:06.355: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200 May 12 21:24:06.355: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 12 21:24:06.355: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200 May 12 21:24:06.355: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200 May 12 21:24:06.355: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600 May 12 21:24:06.355: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200 May 12 21:24:06.355: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 12 21:24:06.355: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400 May 12 21:24:06.355: INFO: Node: node1, totalRequestedCPUResource: 1037, cpuAllocatableMil: 77000, cpuFraction: 0.013467532467532467 May 12 21:24:06.355: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884632576, memFraction: 0.009921517653261606 May 12 21:24:06.367: INFO: Waiting for running... May 12 21:24:11.433: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 12 21:24:16.486: INFO: ComputeCPUMemFraction for node: node2 May 12 21:24:16.505: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400 May 12 21:24:16.505: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000 May 12 21:24:16.505: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840 May 12 21:24:16.505: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200 May 12 21:24:16.505: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 12 21:24:16.505: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200 May 12 21:24:16.505: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200 May 12 21:24:16.505: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600 May 12 21:24:16.505: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200 May 12 21:24:16.505: INFO: Pod for on the node: f8a9ff95-bffe-4a57-bc2d-7db51d45fd0c-0, Cpu: 38013, Mem: 88937371648 May 12 21:24:16.505: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 12 21:24:16.505: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
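For reference: the "balanced pods" step above sizes one filler pod per node so that the node's requested/allocatable fraction lands at exactly 0.5 for both CPU and memory before the scoring assertion runs; the matching recomputation for node1 follows below. A minimal sketch of that arithmetic in Go, using the node2 values from the log (a reconstruction, not the framework's actual helper):

    package main

    import "fmt"

    // fillerRequest returns how much of a resource a balancing pod must
    // request so that requested/allocatable reaches the target fraction.
    func fillerRequest(target float64, allocatable, requested int64) int64 {
        return int64(target*float64(allocatable)) - requested
    }

    func main() {
        // node2 values as reported above: 487m CPU requested of 77000m
        // allocatable; 504944640 bytes requested of 178884632576 allocatable.
        fmt.Println(fillerRequest(0.5, 77000, 487))              // 38013 millicores, as logged
        fmt.Println(fillerRequest(0.5, 178884632576, 504944640)) // 88937371648 bytes, as logged
    }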
May 12 21:24:16.505: INFO: ComputeCPUMemFraction for node: node1 May 12 21:24:16.521: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600 May 12 21:24:16.521: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400 May 12 21:24:16.521: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200 May 12 21:24:16.521: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000 May 12 21:24:16.521: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840 May 12 21:24:16.522: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200 May 12 21:24:16.522: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000 May 12 21:24:16.522: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200 May 12 21:24:16.522: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 12 21:24:16.522: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200 May 12 21:24:16.522: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200 May 12 21:24:16.522: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600 May 12 21:24:16.522: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200 May 12 21:24:16.522: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 12 21:24:16.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400 May 12 21:24:16.522: INFO: Pod for on the node: 00a58c7d-256f-442a-99b1-b0c2910dcd87-0, Cpu: 37463, Mem: 87667509248 May 12 21:24:16.522: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 12 21:24:16.522: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:24:34.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4731" for this suite. 
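The PodTopologySpread spec above pre-loads node2 with a four-replica ReplicaSet matching the test pod's labels, then checks that the test pod, which carries a preferred spread constraint over the dedicated kubernetes.io/e2e-pts-score topology key, is scored onto node1 to even out the skew. A rough sketch of such a constraint with the core/v1 types; the foo=bar selector is illustrative, not the suite's actual labels:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // PreferredSpread builds a scoring-only (ScheduleAnyway) topology spread
    // constraint over the dedicated node label applied by the test above;
    // unlike DoNotSchedule it influences ranking but never filters nodes.
    func PreferredSpread() corev1.TopologySpreadConstraint {
        return corev1.TopologySpreadConstraint{
            MaxSkew:           1,
            TopologyKey:       "kubernetes.io/e2e-pts-score",
            WhenUnsatisfiable: corev1.ScheduleAnyway,
            LabelSelector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"foo": "bar"}, // illustrative
            },
        }
    }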
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:96.446 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":9,"skipped":3279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 21:24:34.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 21:24:34.633: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 21:24:34.642: INFO: Waiting for terminating namespaces to be deleted... 
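The "Waiting up to 1m0s for all (but 0) nodes to be ready" step above polls node status until every node reports a Ready=True condition. A minimal client-go sketch of that kind of wait (hypothetical helper name; the framework's own readiness check is more involved):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // WaitAllNodesReady polls every 2s until all nodes report Ready=True
    // or the timeout expires.
    func WaitAllNodesReady(cs kubernetes.Interface, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                return false, err
            }
            for _, n := range nodes.Items {
                ready := false
                for _, c := range n.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        ready = true
                    }
                }
                if !ready {
                    return false, nil // not ready yet, keep polling
                }
            }
            return true, nil
        })
    }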
May 12 21:24:34.644: INFO: Logging pods the apiserver thinks is on node node1 before test May 12 21:24:34.655: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded) May 12 21:24:34.655: INFO: Container discover ready: false, restart count 0 May 12 21:24:34.655: INFO: Container init ready: false, restart count 0 May 12 21:24:34.655: INFO: Container install ready: false, restart count 0 May 12 21:24:34.655: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded) May 12 21:24:34.655: INFO: Container nodereport ready: true, restart count 0 May 12 21:24:34.655: INFO: Container reconcile ready: true, restart count 0 May 12 21:24:34.655: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container cmk-webhook ready: true, restart count 0 May 12 21:24:34.655: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container kube-flannel ready: true, restart count 2 May 12 21:24:34.655: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container kube-multus ready: true, restart count 1 May 12 21:24:34.655: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container kube-proxy ready: true, restart count 1 May 12 21:24:34.655: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 21:24:34.655: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 21:24:34.655: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:24:34.655: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:24:34.655: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:24:34.655: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded) May 12 21:24:34.655: INFO: Container collectd ready: true, restart count 0 May 12 21:24:34.655: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:24:34.655: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:24:34.655: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded) May 12 21:24:34.655: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:24:34.655: INFO: Container node-exporter ready: true, restart count 0 May 12 21:24:34.655: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 
+0000 UTC (5 container statuses recorded) May 12 21:24:34.655: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 21:24:34.655: INFO: Container grafana ready: true, restart count 0 May 12 21:24:34.655: INFO: Container prometheus ready: true, restart count 1 May 12 21:24:34.655: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 21:24:34.655: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 21:24:34.655: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded) May 12 21:24:34.655: INFO: Container tas-controller ready: true, restart count 0 May 12 21:24:34.655: INFO: Container tas-extender ready: true, restart count 0 May 12 21:24:34.655: INFO: test-pod from sched-priority-4731 started at 2021-05-12 21:24:24 +0000 UTC (1 container statuses recorded) May 12 21:24:34.655: INFO: Container test-pod ready: true, restart count 0 May 12 21:24:34.655: INFO: Logging pods the apiserver thinks is on node node2 before test May 12 21:24:34.662: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:24:34.662: INFO: Container nodereport ready: true, restart count 0 May 12 21:24:34.662: INFO: Container reconcile ready: true, restart count 0 May 12 21:24:34.662: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container kube-flannel ready: true, restart count 1 May 12 21:24:34.662: INFO: kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container kube-multus ready: true, restart count 1 May 12 21:24:34.662: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container kube-proxy ready: true, restart count 2 May 12 21:24:34.662: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:24:34.662: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:24:34.662: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:24:34.662: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded) May 12 21:24:34.662: INFO: Container collectd ready: true, restart count 0 May 12 21:24:34.662: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:24:34.662: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:24:34.662: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:24:34.662: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:24:34.662: INFO: Container node-exporter ready: true, restart count 0 May 12 21:24:34.662: INFO: rs-e2e-pts-score-6hrv5 from sched-priority-4731 started at 2021-05-12 21:24:17 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: 
Container e2e-pts-score ready: true, restart count 0 May 12 21:24:34.662: INFO: rs-e2e-pts-score-x9c2v from sched-priority-4731 started at 2021-05-12 21:24:17 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container e2e-pts-score ready: true, restart count 0 May 12 21:24:34.662: INFO: rs-e2e-pts-score-xpcbg from sched-priority-4731 started at 2021-05-12 21:24:17 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container e2e-pts-score ready: true, restart count 0 May 12 21:24:34.662: INFO: rs-e2e-pts-score-z88lv from sched-priority-4731 started at 2021-05-12 21:24:17 +0000 UTC (1 container statuses recorded) May 12 21:24:34.662: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-7396909a-ab94-4ab4-8c8a-28637d90a20e 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-7396909a-ab94-4ab4-8c8a-28637d90a20e off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7396909a-ab94-4ab4-8c8a-28637d90a20e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:24:44.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7930" for this suite. 
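In the spec above the chosen node is labeled with a generated key and the value 42, and the relaunched pod must carry the hard (RequiredDuringSchedulingIgnoredDuringExecution) form of node affinity that matches it. A rough sketch of such an affinity with the core/v1 types; the key below is a stand-in for the generated one shown in the log:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // requiredAffinity pins a pod to nodes carrying <key>=42; scheduling
    // fails outright if no node matches, which is what makes this the
    // "required" (rather than preferred) NodeAffinity setting.
    var requiredAffinity = &corev1.Affinity{
        NodeAffinity: &corev1.NodeAffinity{
            RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                    MatchExpressions: []corev1.NodeSelectorRequirement{{
                        Key:      "kubernetes.io/e2e-example", // stand-in key
                        Operator: corev1.NodeSelectorOpIn,
                        Values:   []string{"42"},
                    }},
                }},
            },
        },
    }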
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.139 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":10,"skipped":4064,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 21:24:44.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 12 21:24:44.778: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 21:24:44.787: INFO: Waiting for terminating namespaces to be deleted... 
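"Building a namespace api object, basename sched-pred" gives each spec a throwaway namespace with a unique suffix (sched-pred-5366 and friends above), so serial specs never collide. A minimal client-go sketch of the same idea via GenerateName; the e2e framework actually computes its own random suffix, so this is only an approximation:

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // CreateTestNamespace asks the apiserver to append a unique suffix,
    // e.g. basename "sched-pred" -> namespace "sched-pred-x7k2p".
    func CreateTestNamespace(cs kubernetes.Interface, basename string) (*corev1.Namespace, error) {
        return cs.CoreV1().Namespaces().Create(context.TODO(), &corev1.Namespace{
            ObjectMeta: metav1.ObjectMeta{GenerateName: basename + "-"},
        }, metav1.CreateOptions{})
    }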
May 12 21:24:44.791: INFO: Logging pods the apiserver thinks is on node node1 before test May 12 21:24:44.801: INFO: cmk-init-discover-node1-2x2zk from kube-system started at 2021-05-12 16:41:25 +0000 UTC (3 container statuses recorded) May 12 21:24:44.801: INFO: Container discover ready: false, restart count 0 May 12 21:24:44.801: INFO: Container init ready: false, restart count 0 May 12 21:24:44.801: INFO: Container install ready: false, restart count 0 May 12 21:24:44.801: INFO: cmk-v4qwz from kube-system started at 2021-05-12 16:42:07 +0000 UTC (2 container statuses recorded) May 12 21:24:44.801: INFO: Container nodereport ready: true, restart count 0 May 12 21:24:44.801: INFO: Container reconcile ready: true, restart count 0 May 12 21:24:44.801: INFO: cmk-webhook-6c9d5f8578-mwvqc from kube-system started at 2021-05-12 20:46:09 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container cmk-webhook ready: true, restart count 0 May 12 21:24:44.801: INFO: kube-flannel-r7w6z from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container kube-flannel ready: true, restart count 2 May 12 21:24:44.801: INFO: kube-multus-ds-amd64-fhzwc from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container kube-multus ready: true, restart count 1 May 12 21:24:44.801: INFO: kube-proxy-r9vsx from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container kube-proxy ready: true, restart count 1 May 12 21:24:44.801: INFO: kubernetes-dashboard-86c6f9df5b-vkvbq from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container kubernetes-dashboard ready: true, restart count 1 May 12 21:24:44.801: INFO: kubernetes-metrics-scraper-678c97765c-s4sgj from kube-system started at 2021-05-12 16:33:53 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 12 21:24:44.801: INFO: nginx-proxy-node1 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:24:44.801: INFO: node-feature-discovery-worker-qtn84 from kube-system started at 2021-05-12 16:38:48 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:24:44.801: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff from kube-system started at 2021-05-12 16:39:41 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:24:44.801: INFO: collectd-5mpmz from monitoring started at 2021-05-12 16:49:38 +0000 UTC (3 container statuses recorded) May 12 21:24:44.801: INFO: Container collectd ready: true, restart count 0 May 12 21:24:44.801: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:24:44.801: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:24:44.801: INFO: node-exporter-ddxbd from monitoring started at 2021-05-12 16:43:02 +0000 UTC (2 container statuses recorded) May 12 21:24:44.801: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:24:44.801: INFO: Container node-exporter ready: true, restart count 0 May 12 21:24:44.801: INFO: prometheus-k8s-0 from monitoring started at 2021-05-12 16:43:20 
+0000 UTC (5 container statuses recorded) May 12 21:24:44.801: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 12 21:24:44.801: INFO: Container grafana ready: true, restart count 0 May 12 21:24:44.801: INFO: Container prometheus ready: true, restart count 1 May 12 21:24:44.801: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 12 21:24:44.801: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 12 21:24:44.801: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5 from monitoring started at 2021-05-12 20:46:09 +0000 UTC (2 container statuses recorded) May 12 21:24:44.801: INFO: Container tas-controller ready: true, restart count 0 May 12 21:24:44.801: INFO: Container tas-extender ready: true, restart count 0 May 12 21:24:44.801: INFO: test-pod from sched-priority-4731 started at 2021-05-12 21:24:24 +0000 UTC (1 container statuses recorded) May 12 21:24:44.801: INFO: Container test-pod ready: false, restart count 0 May 12 21:24:44.801: INFO: Logging pods the apiserver thinks is on node node2 before test May 12 21:24:44.808: INFO: cmk-5b8cg from kube-system started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:24:44.808: INFO: Container nodereport ready: true, restart count 0 May 12 21:24:44.808: INFO: Container reconcile ready: true, restart count 0 May 12 21:24:44.808: INFO: kube-flannel-rqtcs from kube-system started at 2021-05-12 16:33:20 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container kube-flannel ready: true, restart count 1 May 12 21:24:44.808: INFO: kube-multus-ds-amd64-k28rf from kube-system started at 2021-05-12 16:33:28 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container kube-multus ready: true, restart count 1 May 12 21:24:44.808: INFO: kube-proxy-grtqc from kube-system started at 2021-05-12 16:32:45 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container kube-proxy ready: true, restart count 2 May 12 21:24:44.808: INFO: nginx-proxy-node2 from kube-system started at 2021-05-12 16:38:15 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container nginx-proxy ready: true, restart count 2 May 12 21:24:44.808: INFO: node-feature-discovery-worker-x5q8m from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container nfd-worker ready: true, restart count 0 May 12 21:24:44.808: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg from kube-system started at 2021-05-12 20:46:14 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container kube-sriovdp ready: true, restart count 0 May 12 21:24:44.808: INFO: collectd-w6fng from monitoring started at 2021-05-12 20:46:44 +0000 UTC (3 container statuses recorded) May 12 21:24:44.808: INFO: Container collectd ready: true, restart count 0 May 12 21:24:44.808: INFO: Container collectd-exporter ready: true, restart count 0 May 12 21:24:44.808: INFO: Container rbac-proxy ready: true, restart count 0 May 12 21:24:44.808: INFO: node-exporter-nnf86 from monitoring started at 2021-05-12 20:46:24 +0000 UTC (2 container statuses recorded) May 12 21:24:44.808: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 12 21:24:44.808: INFO: Container node-exporter ready: true, restart count 0 May 12 21:24:44.808: INFO: with-labels from sched-pred-7930 started at 2021-05-12 21:24:38 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container 
with-labels ready: true, restart count 0 May 12 21:24:44.808: INFO: rs-e2e-pts-score-6hrv5 from sched-priority-4731 started at 2021-05-12 21:24:17 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container e2e-pts-score ready: false, restart count 0 May 12 21:24:44.808: INFO: rs-e2e-pts-score-xpcbg from sched-priority-4731 started at 2021-05-12 21:24:17 +0000 UTC (1 container statuses recorded) May 12 21:24:44.808: INFO: Container e2e-pts-score ready: false, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-631a2510-797e-44d8-a138-3ec2ab23b80d testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ebf8a0e83e5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5366/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ebfdd631107], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.10/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ebfde001090], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec0362478ae], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.478772099s] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec03cbe124f], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec042f987df], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec079dcffbb], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167e6ec07bd903b8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167e6ec07c22363a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec0a66f8302], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] 
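The "Considering event" lines above come from enumerating the namespace's events and checking for the expected reasons, here the FailedScheduling pair proving the untolerated pod was rejected. A rough client-go sketch of pulling those events with a field selector (function name and filtering are illustrative):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // DumpFailedScheduling prints FailedScheduling events recorded for one
    // pod, similar in spirit to the "Considering event" checks above.
    func DumpFailedScheduling(cs kubernetes.Interface, ns, pod string) error {
        evs, err := cs.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=" + pod + ",reason=FailedScheduling",
        })
        if err != nil {
            return err
        }
        for _, e := range evs.Items {
            fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
        }
        return nil
    }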
STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167e6ec07bd903b8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167e6ec07c22363a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ebf8a0e83e5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5366/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ebfdd631107], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.10/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ebfde001090], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec0362478ae], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.478772099s] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec03cbe124f], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec042f987df], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec079dcffbb], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec0a66f8302], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.167e6ec100a09ea7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5366/still-no-tolerations to node2] STEP: Considering event: Type = [Warning], Name = [without-toleration.167e6ec1174998f8], Reason = [Failed], Message = [Error: cannot find volume "default-token-cmpxw" to mount into container "without-toleration"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167e6ec11747607d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 548.545472ms] STEP: removing the label kubernetes.io/e2e-label-key-631a2510-797e-44d8-a138-3ec2ab23b80d off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-631a2510-797e-44d8-a138-3ec2ab23b80d STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9a7b4bb2-ecd6-4e19-8e1d-8be55dd5d224=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:24:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5366" for this suite. 
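The FailedScheduling message above is the expected outcome: a NoSchedule taint filters out every pod that lacks a toleration matching the taint's key, value and effect, and this spec deliberately relaunches the pod without one. A sketch of the two objects plus a simplified matching rule (core/v1 ships the real check as (*Toleration).ToleratesTaint); the key is a stand-in for the generated one above:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // The NoSchedule taint the test applies to the chosen node.
    var taint = corev1.Taint{
        Key:    "kubernetes.io/e2e-taint-key", // stand-in for the generated key
        Value:  "testing-taint-value",
        Effect: corev1.TaintEffectNoSchedule,
    }

    // A toleration that would match it; the "not matching" spec passes
    // precisely because the relaunched pod carries no such toleration.
    var tol = corev1.Toleration{
        Key:      "kubernetes.io/e2e-taint-key",
        Operator: corev1.TolerationOpEqual,
        Value:    "testing-taint-value",
        Effect:   corev1.TaintEffectNoSchedule,
    }

    // matches is a simplified form of the toleration check: key and effect
    // must line up (empty means wildcard), and with the Equal operator the
    // values must agree too.
    func matches(t corev1.Toleration, ta corev1.Taint) bool {
        return (t.Key == "" || t.Key == ta.Key) &&
            (t.Operator == corev1.TolerationOpExists || t.Value == ta.Value) &&
            (t.Effect == "" || t.Effect == ta.Effect)
    }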
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.181 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":11,"skipped":4300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 12 21:24:51.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 12 21:24:51.963: INFO: Waiting up to 1m0s for all nodes to be ready May 12 21:25:52.014: INFO: Waiting for terminating namespaces to be deleted... May 12 21:25:52.017: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 12 21:25:52.033: INFO: The status of Pod cmk-init-discover-node1-2x2zk is Succeeded, skipping waiting May 12 21:25:52.033: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 12 21:25:52.033: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
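The "40 / 41 pods in namespace 'kube-system' are running and ready" accounting above counts a pod once its phase is Running and its Ready condition is True; pods that already reached Succeeded, like cmk-init-discover-node1-2x2zk, are treated as done and skipped rather than waited on. A minimal sketch of the per-pod check (hypothetical helper name):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // runningAndReady reports whether a pod counts toward the
    // "running and ready" tally logged above.
    func runningAndReady(p corev1.Pod) bool {
        if p.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }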
[It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 May 12 21:25:52.033: INFO: ComputeCPUMemFraction for node: node1 May 12 21:25:52.050: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600 May 12 21:25:52.050: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400 May 12 21:25:52.050: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200 May 12 21:25:52.050: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000 May 12 21:25:52.050: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840 May 12 21:25:52.050: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200 May 12 21:25:52.050: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000 May 12 21:25:52.050: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200 May 12 21:25:52.050: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 12 21:25:52.050: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200 May 12 21:25:52.050: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200 May 12 21:25:52.050: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600 May 12 21:25:52.050: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200 May 12 21:25:52.050: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 12 21:25:52.050: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400 May 12 21:25:52.050: INFO: Node: node1, totalRequestedCPUResource: 1037, cpuAllocatableMil: 77000, cpuFraction: 0.013467532467532467 May 12 21:25:52.050: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884632576, memFraction: 0.009921517653261606 May 12 21:25:52.050: INFO: ComputeCPUMemFraction for node: node2 May 12 21:25:52.066: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400 May 12 21:25:52.066: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000 May 12 21:25:52.066: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840 May 12 21:25:52.066: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200 May 12 21:25:52.066: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 12 21:25:52.066: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200 May 12 21:25:52.066: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200 May 12 21:25:52.066: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600 May 12 21:25:52.066: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200 May 12 21:25:52.066: INFO: Node: node2, totalRequestedCPUResource: 487, cpuAllocatableMil: 77000, cpuFraction: 0.006324675324675325 May 12 21:25:52.066: INFO: Node: node2, totalRequestedMemResource: 504944640, memAllocatableVal: 178884632576, memFraction: 0.002822739062202405 May 12 21:25:52.080: INFO: Waiting for running... May 12 21:25:57.144: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 12 21:26:02.197: INFO: ComputeCPUMemFraction for node: node1 May 12 21:26:02.214: INFO: Pod for on the node: cmk-init-discover-node1-2x2zk, Cpu: 300, Mem: 629145600 May 12 21:26:02.214: INFO: Pod for on the node: cmk-v4qwz, Cpu: 200, Mem: 419430400 May 12 21:26:02.214: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-mwvqc, Cpu: 100, Mem: 209715200 May 12 21:26:02.214: INFO: Pod for on the node: kube-flannel-r7w6z, Cpu: 150, Mem: 64000000 May 12 21:26:02.214: INFO: Pod for on the node: kube-multus-ds-amd64-fhzwc, Cpu: 100, Mem: 94371840 May 12 21:26:02.214: INFO: Pod for on the node: kube-proxy-r9vsx, Cpu: 100, Mem: 209715200 May 12 21:26:02.214: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-vkvbq, Cpu: 50, Mem: 64000000 May 12 21:26:02.214: INFO: Pod for on the node: kubernetes-metrics-scraper-678c97765c-s4sgj, Cpu: 100, Mem: 209715200 May 12 21:26:02.214: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 12 21:26:02.214: INFO: Pod for on the node: node-feature-discovery-worker-qtn84, Cpu: 100, Mem: 209715200 May 12 21:26:02.214: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-46jff, Cpu: 100, Mem: 209715200 May 12 21:26:02.214: INFO: Pod for on the node: collectd-5mpmz, Cpu: 300, Mem: 629145600 May 12 21:26:02.214: INFO: Pod for on the node: node-exporter-ddxbd, Cpu: 112, Mem: 209715200 May 12 21:26:02.214: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 12 21:26:02.214: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-nbwz5, Cpu: 200, Mem: 419430400 May 12 21:26:02.214: INFO: Pod for on the node: 03a5a995-cc61-4908-9126-c86891ad15a5-0, Cpu: 37463, Mem: 87667509248 May 12 21:26:02.214: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 12 21:26:02.214: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 12 21:26:02.214: INFO: ComputeCPUMemFraction for node: node2 May 12 21:26:02.228: INFO: Pod for on the node: cmk-5b8cg, Cpu: 200, Mem: 419430400 May 12 21:26:02.228: INFO: Pod for on the node: kube-flannel-rqtcs, Cpu: 150, Mem: 64000000 May 12 21:26:02.228: INFO: Pod for on the node: kube-multus-ds-amd64-k28rf, Cpu: 100, Mem: 94371840 May 12 21:26:02.228: INFO: Pod for on the node: kube-proxy-grtqc, Cpu: 100, Mem: 209715200 May 12 21:26:02.228: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 12 21:26:02.228: INFO: Pod for on the node: node-feature-discovery-worker-x5q8m, Cpu: 100, Mem: 209715200 May 12 21:26:02.228: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-xprdg, Cpu: 100, Mem: 209715200 May 12 21:26:02.228: INFO: Pod for on the node: collectd-w6fng, Cpu: 300, Mem: 629145600 May 12 21:26:02.229: INFO: Pod for on the node: node-exporter-nnf86, Cpu: 112, Mem: 209715200 May 12 21:26:02.229: INFO: Pod for on the node: c61abd48-b5a3-4cda-ad14-9096d839c53c-0, Cpu: 38013, Mem: 88937371648 May 12 21:26:02.229: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 12 21:26:02.229: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. 
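"Trying to apply avoidPod annotations on the first node" above writes the scheduler.alpha.kubernetes.io/preferAvoidPods annotation onto node1, so the scheduler's NodePreferAvoidPods scoring steers pods owned by the RC away from it; the "Verify the pods should not scheduled to the node: node1" step below depends on that. A rough sketch of building the annotation value from the core/v1 types; the owner-reference fields are illustrative:

    package sketch

    import (
        "encoding/json"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // AvoidPodsAnnotation renders the JSON value for the node annotation
    // keyed by corev1.PreferAvoidPodsAnnotationKey
    // ("scheduler.alpha.kubernetes.io/preferAvoidPods").
    func AvoidPodsAnnotation(kind, name string) (string, error) {
        ctrl := true
        avoid := corev1.AvoidPods{
            PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
                PodSignature: corev1.PodSignature{
                    PodController: &metav1.OwnerReference{
                        APIVersion: "v1",
                        Kind:       kind, // e.g. "ReplicationController"
                        Name:       name, // e.g. "scheduler-priority-avoid-pod"
                        Controller: &ctrl,
                    },
                },
                Reason:  "some reason",  // illustrative
                Message: "some message", // illustrative
            }},
        }
        b, err := json.Marshal(avoid)
        return string(b), err
    }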
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3377 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3377, will wait for the garbage collector to delete the pods May 12 21:26:08.420: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 5.694072ms May 12 21:26:09.121: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.461088ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 12 21:26:24.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3377" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:92.100 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":12,"skipped":4805,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 12 21:26:24.056: INFO: Running AfterSuite actions on all nodes May 12 21:26:24.056: INFO: Running AfterSuite actions on node 1 May 12 21:26:24.056: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0} Ran 12 of 5484 Specs in 549.600 seconds SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped PASS Ginkgo ran 1 suite in 9m10.778354492s Test Suite Passed
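Each {"msg": ...} fragment interleaved in the output above is a machine-readable progress record emitted alongside the human-readable log, and the final one carries the suite totals. A small sketch of decoding such records, assuming they have first been split out onto their own lines; the struct fields mirror the keys visible above:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    // progress mirrors the JSON status lines embedded in the log above.
    type progress struct {
        Msg       string `json:"msg"`
        Total     int    `json:"total"`
        Completed int    `json:"completed"`
        Skipped   int    `json:"skipped"`
        Failed    int    `json:"failed"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if !strings.HasPrefix(line, `{"msg"`) {
                continue // not a progress record
            }
            var p progress
            if err := json.Unmarshal([]byte(line), &p); err != nil {
                continue // tolerate anything malformed
            }
            fmt.Printf("%d/%d completed, %d skipped, %d failed: %s\n",
                p.Completed, p.Total, p.Skipped, p.Failed, p.Msg)
        }
    }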