I0515 01:28:54.375557 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0515 01:28:54.375692 22 e2e.go:129] Starting e2e run "df24b886-15ed-4d79-aec2-bf14a93251fc" on Ginkgo node 1 {"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1621042133 - Will randomize all specs Will run 12 of 5484 specs May 15 01:28:54.389: INFO: >>> kubeConfig: /root/.kube/config May 15 01:28:54.394: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 15 01:28:54.423: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 15 01:28:54.487: INFO: The status of Pod cmk-init-discover-node2-j75ff is Succeeded, skipping waiting May 15 01:28:54.487: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 15 01:28:54.487: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 15 01:28:54.487: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 15 01:28:54.506: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 15 01:28:54.506: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 15 01:28:54.506: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 15 01:28:54.506: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 15 01:28:54.506: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 15 01:28:54.506: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 15 01:28:54.506: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 15 01:28:54.506: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 15 01:28:54.506: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 15 01:28:54.506: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 15 01:28:54.506: INFO: e2e test version: v1.19.10 May 15 01:28:54.507: INFO: kube-apiserver version: v1.19.8 May 15 01:28:54.507: INFO: >>> kubeConfig: /root/.kube/config May 15 01:28:54.513: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:28:54.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority May 15 01:28:54.536: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 15 
01:28:54.539: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 15 01:28:54.541: INFO: Waiting up to 1m0s for all nodes to be ready May 15 01:29:54.591: INFO: Waiting for terminating namespaces to be deleted... May 15 01:29:54.595: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 15 01:29:54.612: INFO: The status of Pod cmk-init-discover-node2-j75ff is Succeeded, skipping waiting May 15 01:29:54.612: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 15 01:29:54.612: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 May 15 01:29:54.612: INFO: ComputeCPUMemFraction for node: node1 May 15 01:29:54.636: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:29:54.636: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:29:54.636: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:29:54.636: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:29:54.636: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:29:54.636: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:29:54.636: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:29:54.636: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:29:54.636: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:29:54.636: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:29:54.636: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 15 01:29:54.636: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 15 01:29:54.636: INFO: ComputeCPUMemFraction for node: node2 May 15 01:29:54.651: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:29:54.651: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:29:54.651: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:29:54.651: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:29:54.651: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:29:54.651: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:29:54.651: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:29:54.651: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:29:54.651: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:29:54.651: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 
01:29:54.651: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:29:54.651: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:29:54.651: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:29:54.651: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 15 01:29:54.651: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 15 01:29:54.667: INFO: Waiting for running... May 15 01:29:59.729: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 15 01:30:04.781: INFO: ComputeCPUMemFraction for node: node1 May 15 01:30:04.795: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:30:04.795: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:30:04.795: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:30:04.795: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:30:04.795: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:30:04.795: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:30:04.795: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:30:04.795: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:30:04.795: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:30:04.795: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:30:04.795: INFO: Pod for on the node: a434b046-9bb4-4b73-80a6-ecc2628d0d46-0, Cpu: 37513, Mem: 87731509248 May 15 01:30:04.796: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 15 01:30:04.796: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
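The "Waiting for running..." entries above are where the test pads each node with a placeholder pod so that requested/allocatable reaches the same fraction (0.5) on every node before the scoring assertion runs. The sizes logged for that pod on node1 (Cpu: 37513, Mem: 87731509248) are consistent with a simple calculation, sketched below using the node1 figures from this run; the function name is illustrative, not the framework's own helper.

package main

import "fmt"

// paddingRequest returns how much of a resource a balancing pod must request
// so that a node's requested/allocatable ratio reaches targetFraction.
// Illustrative sketch only, not the e2e framework's implementation.
func paddingRequest(targetFraction float64, allocatable, alreadyRequested int64) int64 {
	return int64(targetFraction*float64(allocatable)) - alreadyRequested
}

func main() {
	// node1 figures from the log: 77000 mCPU / 178884632576 bytes allocatable,
	// 987 mCPU / 1710807040 bytes already requested, target fraction 0.5.
	fmt.Println(paddingRequest(0.5, 77000, 987))               // 37513 (mCPU)
	fmt.Println(paddingRequest(0.5, 178884632576, 1710807040)) // 87731509248 (bytes)
}

With that padding pod in place, both nodes report cpuFraction: 0.5 and memFraction: 0.5, which is exactly what the entries that follow show.
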
May 15 01:30:04.796: INFO: ComputeCPUMemFraction for node: node2 May 15 01:30:04.812: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:30:04.812: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:30:04.813: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:30:04.813: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:30:04.813: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:30:04.813: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:30:04.813: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:30:04.813: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:30:04.813: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:30:04.813: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:30:04.813: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:30:04.813: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:30:04.813: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:30:04.813: INFO: Pod for on the node: c5803112-5940-4a72-ab82-642ea8d992f0-0, Cpu: 37963, Mem: 88873371648 May 15 01:30:04.813: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 15 01:30:04.813: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-5091 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-5091, will wait for the garbage collector to delete the pods May 15 01:30:10.999: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.777503ms May 15 01:30:11.699: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.316371ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:30:23.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5091" for this suite. 
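The spec above exercises the NodePreferAvoidPods priority: it creates the scheduler-priority-avoid-pod ReplicationController, writes the scheduler.alpha.kubernetes.io/preferAvoidPods annotation onto node1, scales the RC to one replica, and verifies the replica does not land on node1. A minimal sketch of what that node annotation carries, assuming the core/v1 AvoidPods shape; the controller UID and the reason below are placeholders, not values from this run.

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	// The annotation value is JSON-encoded core/v1 AvoidPods naming the
	// controller whose pods the node asks the scheduler to avoid.
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "placeholder-uid", // placeholder
					Controller: &controller,
				},
			},
			Reason: "some reason", // placeholder
		}},
	}
	value, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}
	node := v1.Node{ObjectMeta: metav1.ObjectMeta{
		Name: "node1",
		Annotations: map[string]string{
			"scheduler.alpha.kubernetes.io/preferAvoidPods": string(value),
		},
	}}
	fmt.Println(node.Annotations)
}
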
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:89.112 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":1,"skipped":178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:30:23.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:30:23.656: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:30:23.664: INFO: Waiting for terminating namespaces to be deleted... 
May 15 01:30:23.666: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:30:23.675: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:30:23.675: INFO: Container nodereport ready: true, restart count 0 May 15 01:30:23.675: INFO: Container reconcile ready: true, restart count 0 May 15 01:30:23.675: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:30:23.675: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:30:23.675: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:30:23.675: INFO: Container kube-multus ready: true, restart count 1 May 15 01:30:23.675: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:30:23.675: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:30:23.675: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:30:23.675: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:30:23.675: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:30:23.675: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:30:23.675: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:30:23.675: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:30:23.675: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:30:23.675: INFO: Container collectd ready: true, restart count 0 May 15 01:30:23.675: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:30:23.675: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:30:23.675: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:30:23.675: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:30:23.675: INFO: Container node-exporter ready: true, restart count 0 May 15 01:30:23.675: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:30:23.675: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:30:23.675: INFO: Container grafana ready: true, restart count 0 May 15 01:30:23.675: INFO: Container prometheus ready: true, restart count 26 May 15 01:30:23.675: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:30:23.675: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:30:23.675: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:30:23.684: INFO: cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses recorded) May 15 01:30:23.684: INFO: Container nodereport ready: true, restart count 0 May 15 01:30:23.684: INFO: Container reconcile ready: true, restart count 0 May 15 01:30:23.684: INFO: cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:30:23.684: INFO: Container discover ready: false, restart count 0 May 15 01:30:23.684: INFO: 
Container init ready: false, restart count 0 May 15 01:30:23.684: INFO: Container install ready: false, restart count 0 May 15 01:30:23.684: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:30:23.684: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:30:23.684: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container kube-multus ready: true, restart count 1 May 15 01:30:23.684: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:30:23.684: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:30:23.684: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:30:23.684: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:30:23.684: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:30:23.684: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:30:23.684: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:30:23.684: INFO: Container collectd ready: true, restart count 0 May 15 01:30:23.684: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:30:23.684: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:30:23.684: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:30:23.684: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:30:23.684: INFO: Container node-exporter ready: true, restart count 0 May 15 01:30:23.684: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:30:23.684: INFO: Container tas-controller ready: true, restart count 0 May 15 01:30:23.684: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-99aefd03-d4f1-46e4-9cbf-35a1d83b43cf=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. 
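The steps above place a NoSchedule taint on the chosen node; the steps that follow relaunch the pod with a matching toleration so it can schedule there anyway. A sketch of that pair of objects using the taint key, value, and effect printed in the log; the pod name is a placeholder and the image is the pause image used elsewhere in this run.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Taint applied to the node by the test (key/value/effect from the log).
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-99aefd03-d4f1-46e4-9cbf-35a1d83b43cf",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// Pod relaunched with a toleration matching that exact taint.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"}, // placeholder name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			Tolerations: []v1.Toleration{{
				Key:      taint.Key,
				Operator: v1.TolerationOpEqual,
				Value:    taint.Value,
				Effect:   taint.Effect,
			}},
		},
	}
	fmt.Println(pod.Spec.Tolerations[0])
}
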
STEP: verifying the node has the label kubernetes.io/e2e-label-key-b6c80c21-219a-464c-ba09-c0fd4e7cd064 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-b6c80c21-219a-464c-ba09-c0fd4e7cd064 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-b6c80c21-219a-464c-ba09-c0fd4e7cd064 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-99aefd03-d4f1-46e4-9cbf-35a1d83b43cf=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:30:33.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5054" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.163 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":2,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:30:33.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 15 01:30:33.825: INFO: Waiting up to 1m0s for all nodes to be ready May 15 01:31:33.874: INFO: Waiting for terminating namespaces to be deleted... May 15 01:31:33.876: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 15 01:31:33.892: INFO: The status of Pod cmk-init-discover-node2-j75ff is Succeeded, skipping waiting May 15 01:31:33.892: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 15 01:31:33.892: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
[BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 May 15 01:31:41.967: INFO: ComputeCPUMemFraction for node: node1 May 15 01:31:41.982: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:31:41.983: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:31:41.983: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:31:41.983: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:31:41.983: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:31:41.983: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:31:41.983: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:31:41.983: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:31:41.983: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:31:41.983: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:31:41.983: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 15 01:31:41.983: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 15 01:31:41.983: INFO: ComputeCPUMemFraction for node: node2 May 15 01:31:41.998: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:31:41.998: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:31:41.998: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:31:41.998: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:31:41.998: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:31:41.998: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:31:41.998: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:31:41.998: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:31:41.998: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:31:41.998: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:31:41.998: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:31:41.998: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:31:41.998: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:31:41.998: INFO: Node: node2, 
totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 15 01:31:41.998: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 15 01:31:42.008: INFO: Waiting for running... May 15 01:31:47.072: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 15 01:31:52.129: INFO: ComputeCPUMemFraction for node: node1 May 15 01:31:52.144: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:31:52.144: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:31:52.144: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:31:52.144: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:31:52.144: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:31:52.144: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:31:52.144: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:31:52.144: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:31:52.144: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:31:52.144: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:31:52.144: INFO: Pod for on the node: 7080c97e-4d34-46f3-adaa-1d5fc96349af-0, Cpu: 37513, Mem: 87731509248 May 15 01:31:52.144: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 15 01:31:52.144: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 15 01:31:52.144: INFO: ComputeCPUMemFraction for node: node2 May 15 01:31:52.160: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:31:52.160: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:31:52.160: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:31:52.160: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:31:52.160: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:31:52.160: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:31:52.160: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:31:52.160: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:31:52.160: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:31:52.160: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:31:52.160: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:31:52.160: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:31:52.160: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:31:52.160: INFO: Pod for on the node: de1ac645-73f2-4130-83b0-35dd7b35d113-0, Cpu: 37963, Mem: 88873371648 May 15 01:31:52.160: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 15 01:31:52.160: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Run a ReplicaSet with 4 replicas on node "node1" STEP: Verifying if the test-pod lands on node "node2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:32:10.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3823" for this suite. 
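The PodTopologySpread scoring spec above labels both nodes with the dedicated topology key kubernetes.io/e2e-pts-score, runs a 4-replica ReplicaSet on node1, and checks that a pod spreading over that key prefers node2. A sketch of the kind of soft (scoring-only) spread constraint involved; only the topology key comes from the log, while maxSkew, the pod labels, and the selector are assumptions.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",                          // placeholder name
			Labels: map[string]string{"app": "spread-me"}, // placeholder label
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score",
				WhenUnsatisfiable: v1.ScheduleAnyway, // score nodes, do not filter them
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "spread-me"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}

Because WhenUnsatisfiable is ScheduleAnyway, the constraint only influences scoring, which is why the spec balances both nodes to the same resource fraction first: the spread score is then the deciding factor.
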
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:96.440 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":3,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:32:10.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 15 01:32:10.270: INFO: Waiting up to 1m0s for all nodes to be ready May 15 01:33:10.326: INFO: Waiting for terminating namespaces to be deleted... May 15 01:33:10.329: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 15 01:33:10.346: INFO: The status of Pod cmk-init-discover-node2-j75ff is Succeeded, skipping waiting May 15 01:33:10.346: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 15 01:33:10.346: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 STEP: Trying to launch a pod with a label to get a node which can launch it. 
STEP: Verifying the node has a label kubernetes.io/hostname May 15 01:33:14.371: INFO: ComputeCPUMemFraction for node: node1 May 15 01:33:14.387: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:33:14.387: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:33:14.387: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:33:14.387: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:33:14.387: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:33:14.387: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:33:14.387: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:33:14.387: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:33:14.387: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:33:14.387: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:33:14.387: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 15 01:33:14.387: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 15 01:33:14.387: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 15 01:33:14.387: INFO: ComputeCPUMemFraction for node: node2 May 15 01:33:14.404: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:33:14.404: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:33:14.404: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:33:14.404: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:33:14.404: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:33:14.404: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:33:14.404: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:33:14.404: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:33:14.404: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:33:14.404: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:33:14.404: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:33:14.404: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:33:14.404: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:33:14.404: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 15 01:33:14.404: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 15 01:33:14.415: INFO: Waiting for running... May 15 01:33:19.479: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 15 01:33:24.530: INFO: ComputeCPUMemFraction for node: node1 May 15 01:33:24.548: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:33:24.548: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:33:24.548: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:33:24.548: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:33:24.548: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:33:24.548: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:33:24.548: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:33:24.548: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:33:24.548: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:33:24.548: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:33:24.548: INFO: Pod for on the node: ff12e4cb-78f7-46ae-9036-2e9e2e56968b-0, Cpu: 45213, Mem: 105619972505 May 15 01:33:24.548: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 15 01:33:24.548: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 15 01:33:24.548: INFO: Node: node1, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459 STEP: Compute Cpu, Mem Fraction after create balanced pods. May 15 01:33:24.548: INFO: ComputeCPUMemFraction for node: node2 May 15 01:33:24.567: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:33:24.567: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:33:24.567: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:33:24.567: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:33:24.567: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:33:24.567: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:33:24.567: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:33:24.567: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:33:24.567: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:33:24.567: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:33:24.567: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:33:24.567: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:33:24.567: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:33:24.567: INFO: Pod for on the node: 49e4b56d-bdb0-4f66-90cb-7c3d987ea18c-0, Cpu: 45663, Mem: 106761834905 May 15 01:33:24.567: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 15 01:33:24.567: INFO: Node: node2, totalRequestedMemResource: 107330779545, memAllocatableVal: 178884632576, memFraction: 0.5999999999966459 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. 
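In the anti-affinity spec above, pod-with-label-security-s1 is running on node1 and the pod-with-pod-antiaffinity pod is expected to land on node2. A sketch of the anti-affinity stanza such a pod would carry; the security=S1 label is inferred from the first pod's name rather than read out of the log, so treat it as an assumption.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Refuse co-location (per hostname) with any pod labelled security=S1.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			Affinity: &v1.Affinity{
				PodAntiAffinity: &v1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []v1.PodAffinityTerm{{
						TopologyKey: "kubernetes.io/hostname",
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"S1"},
							}},
						},
					}},
				},
			},
		},
	}
	fmt.Println(pod.Spec.Affinity.PodAntiAffinity != nil)
}
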
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:33:40.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1409" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:90.367 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":4,"skipped":1336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:33:40.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:33:40.650: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:33:40.658: INFO: Waiting for terminating namespaces to be deleted... 
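The predicate spec that follows ("Add RuntimeClass and fake resource", with the example.com/beardsecond FailedScheduling events further below) checks that a pod's RuntimeClass overhead is added to its own requests when the scheduler fits it onto a node: one filler pod consumes most of the fake resource, and a second pod then fails to schedule. A hedged sketch of a RuntimeClass carrying a pod overhead plus a pod that uses it and requests the fake extended resource; the handler, quantities, and object names are illustrative, only the resource name comes from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass whose overhead is charged on top of the pod's requests
	// when the scheduler sizes the pod against a node.
	rc := nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"}, // placeholder
		Handler:    "runc",                                  // placeholder
		Overhead: &nodev1beta1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}

	runtimeClassName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-overhead"}, // placeholder
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClassName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Fake extended resource name taken from the
						// FailedScheduling events below.
						"example.com/beardsecond": resource.MustParse("1000"),
					},
				},
			}},
		},
	}
	fmt.Println(rc.Name, *pod.Spec.RuntimeClassName)
}
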
May 15 01:33:40.660: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:33:40.670: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:33:40.670: INFO: Container nodereport ready: true, restart count 0 May 15 01:33:40.670: INFO: Container reconcile ready: true, restart count 0 May 15 01:33:40.670: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:33:40.670: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container kube-multus ready: true, restart count 1 May 15 01:33:40.670: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:33:40.670: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:33:40.670: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:33:40.670: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:33:40.670: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:33:40.670: INFO: Container collectd ready: true, restart count 0 May 15 01:33:40.670: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:33:40.670: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:33:40.670: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:33:40.670: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:33:40.670: INFO: Container node-exporter ready: true, restart count 0 May 15 01:33:40.670: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:33:40.670: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:33:40.670: INFO: Container grafana ready: true, restart count 0 May 15 01:33:40.670: INFO: Container prometheus ready: true, restart count 26 May 15 01:33:40.670: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:33:40.670: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:33:40.670: INFO: pod-with-label-security-s1 from sched-priority-1409 started at 2021-05-15 01:33:10 +0000 UTC (1 container statuses recorded) May 15 01:33:40.670: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 May 15 01:33:40.670: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:33:40.685: INFO: cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses recorded) May 15 01:33:40.685: INFO: Container nodereport ready: true, restart count 0 May 15 01:33:40.685: INFO: Container reconcile ready: true, restart count 0 May 15 
01:33:40.685: INFO: cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:33:40.685: INFO: Container discover ready: false, restart count 0 May 15 01:33:40.685: INFO: Container init ready: false, restart count 0 May 15 01:33:40.685: INFO: Container install ready: false, restart count 0 May 15 01:33:40.685: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:33:40.685: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:33:40.685: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:33:40.685: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:33:40.685: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:33:40.685: INFO: Container kube-multus ready: true, restart count 1 May 15 01:33:40.686: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:33:40.686: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:33:40.686: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:33:40.686: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:33:40.686: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:33:40.686: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:33:40.686: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:33:40.686: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:33:40.686: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:33:40.686: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:33:40.686: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:33:40.686: INFO: Container collectd ready: true, restart count 0 May 15 01:33:40.686: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:33:40.686: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:33:40.686: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:33:40.686: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:33:40.686: INFO: Container node-exporter ready: true, restart count 0 May 15 01:33:40.686: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:33:40.686: INFO: Container tas-controller ready: true, restart count 0 May 15 01:33:40.686: INFO: Container tas-extender ready: true, restart count 0 May 15 01:33:40.686: INFO: pod-with-pod-antiaffinity from sched-priority-1409 started at 2021-05-15 01:33:24 +0000 UTC (1 container statuses recorded) May 15 01:33:40.686: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f197f258a7a35], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f197f25de1ae8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f197fb9635429], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7576/filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f19800cf4a19f], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.94/24]] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f19800db1257f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f19802ad5151d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 488.88722ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f19803220caf2], Reason = [Created], Message = [Created container filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599] STEP: Considering event: Type = [Normal], Name = [filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599.167f1980379a8a89], Reason = [Started], Message = [Started container filler-pod-fbbaf5f5-4866-486e-8964-953a9ca21599] STEP: Considering event: Type = [Normal], Name = [without-label.167f197e34dae7ad], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7576/without-label to node1] STEP: Considering event: Type = [Normal], Name = [without-label.167f197e8915700c], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.93/24]] STEP: Considering event: Type = [Normal], Name = [without-label.167f197e89e20987], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-label.167f197ea84f481d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 510.462989ms] STEP: Considering event: Type = [Normal], Name = [without-label.167f197eaf26ad48], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.167f197eb50b9295], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.167f197f24c51c51], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [without-label.167f197f2875d51a], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume 
"default-token-7kdll" : object "sched-pred-7576"/"default-token-7kdll" not registered] STEP: Considering event: Type = [Warning], Name = [additional-pod0e15e9f4-8534-43e1-aac3-cfa743ea6814.167f19808c9bb2e1], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [additional-pod0e15e9f4-8534-43e1-aac3-cfa743ea6814.167f19808cf60338], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:33:51.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7576" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.179 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":5,"skipped":2498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:33:51.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 May 15 01:33:51.842: INFO: Waiting up to 1m0s for all nodes to be ready May 15 01:34:51.902: INFO: Waiting for terminating namespaces to be deleted... May 15 01:34:51.905: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 15 01:34:51.922: INFO: The status of Pod cmk-init-discover-node2-j75ff is Succeeded, skipping waiting May 15 01:34:51.923: INFO: 40 / 41 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 15 01:34:51.923: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 May 15 01:34:51.923: INFO: ComputeCPUMemFraction for node: node1 May 15 01:34:51.939: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:34:51.939: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:34:51.939: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:34:51.939: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:34:51.939: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:34:51.939: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:34:51.939: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:34:51.939: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:34:51.939: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:34:51.939: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:34:51.939: INFO: Node: node1, totalRequestedCPUResource: 987, cpuAllocatableMil: 77000, cpuFraction: 0.012818181818181819 May 15 01:34:51.939: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884632576, memFraction: 0.009563745165606416 May 15 01:34:51.939: INFO: ComputeCPUMemFraction for node: node2 May 15 01:34:51.954: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:34:51.954: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:34:51.954: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:34:51.954: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:34:51.954: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:34:51.954: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:34:51.954: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:34:51.954: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:34:51.954: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:34:51.954: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:34:51.954: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:34:51.954: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:34:51.954: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:34:51.954: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 May 15 01:34:51.954: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884632576, memFraction: 0.003180511549857594 May 15 01:34:51.968: INFO: Waiting for running... May 15 01:34:57.035: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 15 01:35:02.088: INFO: ComputeCPUMemFraction for node: node1 May 15 01:35:02.104: INFO: Pod for on the node: cmk-4s6dm, Cpu: 200, Mem: 419430400 May 15 01:35:02.104: INFO: Pod for on the node: kube-flannel-hj8sj, Cpu: 150, Mem: 64000000 May 15 01:35:02.104: INFO: Pod for on the node: kube-multus-ds-amd64-jhf4c, Cpu: 100, Mem: 94371840 May 15 01:35:02.104: INFO: Pod for on the node: kube-proxy-l7697, Cpu: 100, Mem: 209715200 May 15 01:35:02.104: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 15 01:35:02.104: INFO: Pod for on the node: node-feature-discovery-worker-bw8zg, Cpu: 100, Mem: 209715200 May 15 01:35:02.104: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc, Cpu: 100, Mem: 209715200 May 15 01:35:02.104: INFO: Pod for on the node: collectd-mrzps, Cpu: 300, Mem: 629145600 May 15 01:35:02.104: INFO: Pod for on the node: node-exporter-flvqz, Cpu: 112, Mem: 209715200 May 15 01:35:02.104: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 500, Mem: 1205862400 May 15 01:35:02.104: INFO: Pod for on the node: d9d7b714-6a08-49b4-802f-b643473287b8-0, Cpu: 37513, Mem: 87731509248 May 15 01:35:02.104: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 15 01:35:02.104: INFO: Node: node1, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
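(For reference, the cpuFraction and memFraction figures in the ComputeCPUMemFraction entries above appear to be plain requested/allocatable ratios, and the balancing pods are sized to bring both ratios up to 0.5. A standalone arithmetic check against the logged node1 numbers; this is not the test's own code.)

package main

import "fmt"

func main() {
	// Values copied from the log for node1 before the balancing pod is created.
	requestedCPU, allocatableCPU := 987.0, 77000.0                // millicores
	requestedMem, allocatableMem := 1710807040.0, 178884632576.0 // bytes

	fmt.Println(requestedCPU / allocatableCPU) // ~0.01282, matching the logged cpuFraction
	fmt.Println(requestedMem / allocatableMem) // ~0.00956, matching the logged memFraction

	// The balancing pod created afterwards tops both ratios up to exactly 0.5:
	fmt.Println(allocatableCPU*0.5 - requestedCPU) // 37513 mCPU, the balancing pod's CPU in the log
	fmt.Println(allocatableMem*0.5 - requestedMem) // 87731509248 bytes, the balancing pod's memory in the log
}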
May 15 01:35:02.104: INFO: ComputeCPUMemFraction for node: node2 May 15 01:35:02.118: INFO: Pod for on the node: cmk-d2qwf, Cpu: 200, Mem: 419430400 May 15 01:35:02.118: INFO: Pod for on the node: cmk-init-discover-node2-j75ff, Cpu: 300, Mem: 629145600 May 15 01:35:02.118: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-pjgxh, Cpu: 100, Mem: 209715200 May 15 01:35:02.118: INFO: Pod for on the node: kube-flannel-rqcwp, Cpu: 150, Mem: 64000000 May 15 01:35:02.118: INFO: Pod for on the node: kube-multus-ds-amd64-n7cb2, Cpu: 100, Mem: 94371840 May 15 01:35:02.118: INFO: Pod for on the node: kube-proxy-t524z, Cpu: 100, Mem: 209715200 May 15 01:35:02.118: INFO: Pod for on the node: kubernetes-dashboard-86c6f9df5b-ndntg, Cpu: 50, Mem: 64000000 May 15 01:35:02.118: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 15 01:35:02.118: INFO: Pod for on the node: node-feature-discovery-worker-76m6b, Cpu: 100, Mem: 209715200 May 15 01:35:02.118: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw, Cpu: 100, Mem: 209715200 May 15 01:35:02.118: INFO: Pod for on the node: collectd-xzrgs, Cpu: 300, Mem: 629145600 May 15 01:35:02.118: INFO: Pod for on the node: node-exporter-rnd5f, Cpu: 112, Mem: 209715200 May 15 01:35:02.118: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq, Cpu: 200, Mem: 419430400 May 15 01:35:02.118: INFO: Pod for on the node: 129e1726-8d69-4ca1-8ba5-c930bd9f258c-0, Cpu: 37963, Mem: 88873371648 May 15 01:35:02.118: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 15 01:35:02.119: INFO: Node: node2, totalRequestedMemResource: 89442316288, memAllocatableVal: 178884632576, memFraction: 0.5 STEP: Trying to apply 10 (tolerable) taints on the first node. 
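(The steps below apply ten PreferNoSchedule taints to the first node and ten different ones to the other nodes, then create a pod that tolerates only the first node's set. A minimal sketch of one such taint/toleration pair using the core/v1 Go types; the key and value here are placeholders for the generated UUID-based names in the log.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A PreferNoSchedule taint like the ones the test adds to each node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // illustrative, not the test's UUID-based key
		Value:  "testing-taint-value-example",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// A pod tolerating exactly that taint, so the scheduler prefers the tainted but tolerated node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			Tolerations: []corev1.Toleration{{
				Key:      taint.Key,
				Operator: corev1.TolerationOpEqual,
				Value:    taint.Value,
				Effect:   corev1.TaintEffectPreferNoSchedule,
			}},
		},
	}
	fmt.Println(pod.Name, len(pod.Spec.Tolerations))
}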
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7e64f35b-2d80-4cc4-9a25-23f2e1e67765=testing-taint-value-3d27659b-31d8-4c93-aade-1138a2f4fecb:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1bae4649-adb6-4ca8-8e71-39d186971b70=testing-taint-value-412cfc56-49e2-40de-b4bf-1dd5f0ea40e0:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-a6aa72bb-ede2-4377-97d3-db04a64cf240=testing-taint-value-8308d2f2-b898-4b14-b719-ae0fab6f00ad:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7c2ddb7d-2e8f-4a41-8232-4e5fd1bcdfed=testing-taint-value-94527656-6468-4789-9845-92de7a1d7e6f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-fde1610b-bdf5-462b-8c58-f6cfb37b13f1=testing-taint-value-22202f2f-d792-4699-9dd6-2f0de186df07:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-ae261997-545e-4a18-80d0-7e884d1f9808=testing-taint-value-0aa4d1d9-e4ec-4d52-ac87-ac72ef9af43d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9f4edfd6-e178-41fa-a4e1-c86127551e44=testing-taint-value-e98988da-ba3d-416a-96ce-4ce55278d15a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8e33c60d-be8e-4d58-8bd9-d1f5ef8d2416=testing-taint-value-3b5e882b-ee2b-40c7-b7a1-ae0b91ac9445:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-43024e83-fbce-4989-8b14-956a28f3c685=testing-taint-value-fcba2af3-8716-4d64-a7f0-911a466d620b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7d5c8364-718c-4a22-a325-e4dde2fe7be4=testing-taint-value-4abff6eb-64b7-4c83-acc9-b713a73f1457:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0ef23746-49e7-4a70-be07-d9605d7854bf=testing-taint-value-22daa94e-8b82-43f7-b301-c2e4c381a12a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c1126d54-7073-457c-af1c-35a1155bc7d6=testing-taint-value-dacad127-59bf-485e-b1de-67e13e6c71c4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-bd1eac1a-f3a5-41b7-b141-ad488f4a8d1f=testing-taint-value-a6c1e1a3-d8c5-48eb-94b1-1fd17ce14318:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9cb1c440-2502-4db6-bdf9-34f4389dff88=testing-taint-value-c7dc14d6-a9f9-42c5-a4e3-cc3252f5b96d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f2cdc4ea-b06f-4e3d-9254-7d025d0418e5=testing-taint-value-272343c3-2a38-4b88-ad3e-5c0f6c3729dd:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-95857f3c-ff50-4885-bee5-add66690491a=testing-taint-value-8ae55710-2404-4ca6-b613-ef520046e5b9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7504e4bf-3357-4ad7-afd9-6e4dd9f63b53=testing-taint-value-07c4cb98-2243-46bf-a3e4-8f870d2b7704:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-c4f51c47-0dc2-4a02-9cd0-26e3ddbe5244=testing-taint-value-8816b2a0-1df5-4c3d-b648-2f046f1486d0:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9dc918fa-6a01-419f-a3fb-8236d919caca=testing-taint-value-435dc664-f2c7-46a9-b44d-a94f77a50f34:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-14d4e267-4afc-48f7-9b1b-1c3a55d54146=testing-taint-value-1726c1d7-6fbc-4f38-821d-b5f314e42e46:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-14d4e267-4afc-48f7-9b1b-1c3a55d54146=testing-taint-value-1726c1d7-6fbc-4f38-821d-b5f314e42e46:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9dc918fa-6a01-419f-a3fb-8236d919caca=testing-taint-value-435dc664-f2c7-46a9-b44d-a94f77a50f34:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c4f51c47-0dc2-4a02-9cd0-26e3ddbe5244=testing-taint-value-8816b2a0-1df5-4c3d-b648-2f046f1486d0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7504e4bf-3357-4ad7-afd9-6e4dd9f63b53=testing-taint-value-07c4cb98-2243-46bf-a3e4-8f870d2b7704:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-95857f3c-ff50-4885-bee5-add66690491a=testing-taint-value-8ae55710-2404-4ca6-b613-ef520046e5b9:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f2cdc4ea-b06f-4e3d-9254-7d025d0418e5=testing-taint-value-272343c3-2a38-4b88-ad3e-5c0f6c3729dd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9cb1c440-2502-4db6-bdf9-34f4389dff88=testing-taint-value-c7dc14d6-a9f9-42c5-a4e3-cc3252f5b96d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-bd1eac1a-f3a5-41b7-b141-ad488f4a8d1f=testing-taint-value-a6c1e1a3-d8c5-48eb-94b1-1fd17ce14318:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-c1126d54-7073-457c-af1c-35a1155bc7d6=testing-taint-value-dacad127-59bf-485e-b1de-67e13e6c71c4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0ef23746-49e7-4a70-be07-d9605d7854bf=testing-taint-value-22daa94e-8b82-43f7-b301-c2e4c381a12a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7d5c8364-718c-4a22-a325-e4dde2fe7be4=testing-taint-value-4abff6eb-64b7-4c83-acc9-b713a73f1457:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-43024e83-fbce-4989-8b14-956a28f3c685=testing-taint-value-fcba2af3-8716-4d64-a7f0-911a466d620b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8e33c60d-be8e-4d58-8bd9-d1f5ef8d2416=testing-taint-value-3b5e882b-ee2b-40c7-b7a1-ae0b91ac9445:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9f4edfd6-e178-41fa-a4e1-c86127551e44=testing-taint-value-e98988da-ba3d-416a-96ce-4ce55278d15a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-ae261997-545e-4a18-80d0-7e884d1f9808=testing-taint-value-0aa4d1d9-e4ec-4d52-ac87-ac72ef9af43d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-fde1610b-bdf5-462b-8c58-f6cfb37b13f1=testing-taint-value-22202f2f-d792-4699-9dd6-2f0de186df07:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7c2ddb7d-2e8f-4a41-8232-4e5fd1bcdfed=testing-taint-value-94527656-6468-4789-9845-92de7a1d7e6f:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-a6aa72bb-ede2-4377-97d3-db04a64cf240=testing-taint-value-8308d2f2-b898-4b14-b719-ae0fab6f00ad:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1bae4649-adb6-4ca8-8e71-39d186971b70=testing-taint-value-412cfc56-49e2-40de-b4bf-1dd5f0ea40e0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7e64f35b-2d80-4cc4-9a25-23f2e1e67765=testing-taint-value-3d27659b-31d8-4c93-aade-1138a2f4fecb:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:35:21.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-962" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:89.692 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":6,"skipped":3129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:35:21.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:35:21.539: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:35:21.547: INFO: Waiting for terminating namespaces to be deleted... 
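(The spec announced above spreads four pods across two nodes with MaxSkew=1 over a dedicated topology key. A minimal sketch of a pod carrying such a constraint, using the core/v1 Go types; the pod name and labels are illustrative, while the topology key and MaxSkew mirror what the log shows later in this spec.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-pts-filter",
			Labels: map[string]string{"app": "e2e-pts-filter"}, // illustrative label
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-filter", // the dedicated key the test applies to both nodes
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "e2e-pts-filter"}},
			}},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].MaxSkew)
}

(With four such replicas and two eligible nodes, a 3/1 split would have a skew of 2 and violate the constraint, so only an even 2/2 placement is schedulable.)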
May 15 01:35:21.549: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:35:21.558: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:35:21.558: INFO: Container nodereport ready: true, restart count 0 May 15 01:35:21.558: INFO: Container reconcile ready: true, restart count 0 May 15 01:35:21.558: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:35:21.558: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container kube-multus ready: true, restart count 1 May 15 01:35:21.558: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:35:21.558: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:35:21.558: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:35:21.558: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:35:21.558: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:35:21.558: INFO: Container collectd ready: true, restart count 0 May 15 01:35:21.558: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:35:21.558: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:35:21.558: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:35:21.558: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:35:21.558: INFO: Container node-exporter ready: true, restart count 0 May 15 01:35:21.558: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:35:21.558: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:35:21.558: INFO: Container grafana ready: true, restart count 0 May 15 01:35:21.558: INFO: Container prometheus ready: true, restart count 26 May 15 01:35:21.558: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:35:21.558: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:35:21.558: INFO: with-tolerations from sched-priority-962 started at 2021-05-15 01:35:02 +0000 UTC (1 container statuses recorded) May 15 01:35:21.558: INFO: Container with-tolerations ready: true, restart count 0 May 15 01:35:21.558: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:35:21.567: INFO: cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses recorded) May 15 01:35:21.567: INFO: Container nodereport ready: true, restart count 0 May 15 01:35:21.567: INFO: Container reconcile ready: true, restart count 0 May 15 01:35:21.567: INFO: 
cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:35:21.567: INFO: Container discover ready: false, restart count 0 May 15 01:35:21.567: INFO: Container init ready: false, restart count 0 May 15 01:35:21.567: INFO: Container install ready: false, restart count 0 May 15 01:35:21.567: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:35:21.567: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:35:21.567: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container kube-multus ready: true, restart count 1 May 15 01:35:21.567: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:35:21.567: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:35:21.567: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:35:21.567: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:35:21.567: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:35:21.567: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:35:21.567: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:35:21.567: INFO: Container collectd ready: true, restart count 0 May 15 01:35:21.567: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:35:21.567: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:35:21.567: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:35:21.567: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:35:21.567: INFO: Container node-exporter ready: true, restart count 0 May 15 01:35:21.567: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:35:21.567: INFO: Container tas-controller ready: true, restart count 0 May 15 01:35:21.567: INFO: Container tas-extender ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
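(The repeated "Trying to launch a pod without a label ... Explicitly delete pod here to free the resource it takes" pairs are the framework's way of probing for a node that can run pods. A rough client-go sketch of that probe pattern, assuming a reachable cluster via the default kubeconfig; the namespace, pod name, and simple polling loop are illustrative, not the framework's own code.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	// A minimal pause pod with no labels and no constraints.
	probe := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "without-label"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	created, err := client.CoreV1().Pods("default").Create(ctx, probe, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Poll until the scheduler has bound the pod; spec.nodeName then names a usable node.
	var nodeName string
	for i := 0; i < 60 && nodeName == ""; i++ {
		time.Sleep(time.Second)
		p, err := client.CoreV1().Pods("default").Get(ctx, created.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		nodeName = p.Spec.NodeName
	}
	fmt.Println("a node that can launch pods:", nodeName)

	// Explicitly delete the probe so it does not hold resources during the test proper.
	_ = client.CoreV1().Pods("default").Delete(ctx, created.Name, metav1.DeleteOptions{})
}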
STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:35:35.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6964" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":7,"skipped":3492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:35:35.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:35:35.715: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:35:35.723: INFO: 
Waiting for terminating namespaces to be deleted... May 15 01:35:35.725: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:35:35.739: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:35:35.739: INFO: Container nodereport ready: true, restart count 0 May 15 01:35:35.739: INFO: Container reconcile ready: true, restart count 0 May 15 01:35:35.739: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:35:35.739: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:35:35.739: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:35:35.739: INFO: Container kube-multus ready: true, restart count 1 May 15 01:35:35.739: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:35:35.739: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:35:35.739: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:35:35.739: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:35:35.739: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:35:35.739: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:35:35.739: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:35:35.739: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:35:35.739: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:35:35.739: INFO: Container collectd ready: true, restart count 0 May 15 01:35:35.739: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:35:35.739: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:35:35.739: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:35:35.739: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:35:35.739: INFO: Container node-exporter ready: true, restart count 0 May 15 01:35:35.739: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:35:35.739: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:35:35.740: INFO: Container grafana ready: true, restart count 0 May 15 01:35:35.740: INFO: Container prometheus ready: true, restart count 26 May 15 01:35:35.740: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:35:35.740: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:35:35.740: INFO: rs-e2e-pts-filter-hk8nb from sched-pred-6964 started at 2021-05-15 01:35:29 +0000 UTC (1 container statuses recorded) May 15 01:35:35.740: INFO: Container e2e-pts-filter ready: true, restart count 0 May 15 01:35:35.740: INFO: rs-e2e-pts-filter-xcmdz from sched-pred-6964 started at 2021-05-15 01:35:29 +0000 UTC (1 container statuses recorded) May 15 01:35:35.740: INFO: Container e2e-pts-filter ready: true, restart count 0 May 15 01:35:35.740: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:35:35.748: INFO: 
cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses recorded) May 15 01:35:35.748: INFO: Container nodereport ready: true, restart count 0 May 15 01:35:35.748: INFO: Container reconcile ready: true, restart count 0 May 15 01:35:35.748: INFO: cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:35:35.748: INFO: Container discover ready: false, restart count 0 May 15 01:35:35.748: INFO: Container init ready: false, restart count 0 May 15 01:35:35.748: INFO: Container install ready: false, restart count 0 May 15 01:35:35.748: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:35:35.748: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:35:35.748: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container kube-multus ready: true, restart count 1 May 15 01:35:35.748: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:35:35.748: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:35:35.748: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:35:35.748: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:35:35.748: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:35:35.748: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:35:35.749: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:35:35.749: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:35:35.749: INFO: Container collectd ready: true, restart count 0 May 15 01:35:35.749: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:35:35.749: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:35:35.749: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:35:35.749: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:35:35.749: INFO: Container node-exporter ready: true, restart count 0 May 15 01:35:35.749: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:35:35.749: INFO: Container tas-controller ready: true, restart count 0 May 15 01:35:35.749: INFO: Container tas-extender ready: true, restart count 0 May 15 01:35:35.749: INFO: rs-e2e-pts-filter-cx94h from sched-pred-6964 started at 2021-05-15 01:35:29 +0000 UTC (1 container statuses 
recorded) May 15 01:35:35.749: INFO: Container e2e-pts-filter ready: true, restart count 0 May 15 01:35:35.749: INFO: rs-e2e-pts-filter-xbs46 from sched-pred-6964 started at 2021-05-15 01:35:29 +0000 UTC (1 container statuses recorded) May 15 01:35:35.749: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 15 01:35:35.785: INFO: Pod cmk-4s6dm requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod cmk-d2qwf requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod cmk-webhook-6c9d5f8578-pjgxh requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod kube-flannel-hj8sj requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod kube-flannel-rqcwp requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod kube-multus-ds-amd64-jhf4c requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod kube-multus-ds-amd64-n7cb2 requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod kube-proxy-l7697 requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod kube-proxy-t524z requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod kubernetes-dashboard-86c6f9df5b-ndntg requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod node-feature-discovery-worker-76m6b requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod node-feature-discovery-worker-bw8zg requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod collectd-mrzps requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod collectd-xzrgs requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod node-exporter-flvqz requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod node-exporter-rnd5f requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod rs-e2e-pts-filter-cx94h requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod rs-e2e-pts-filter-hk8nb requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Pod rs-e2e-pts-filter-xbs46 requesting local ephemeral resource =0 on Node node2 May 15 01:35:35.785: INFO: Pod rs-e2e-pts-filter-xcmdz requesting local ephemeral resource =0 on Node node1 May 15 01:35:35.785: INFO: Using pod capacity: 40542413347 May 15 01:35:35.785: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 May 15 
01:35:35.785: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 15 01:35:35.980: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.167f199900222de5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-0 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167f199a8f55b840], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.102/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167f199ac7596b31], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167f199af50e6108], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 766.825177ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167f199b0cfcb747], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.167f199b45a918aa], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167f199900afe9b1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167f1999d9374b27], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.100/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167f1999d9fb6612], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167f1999f741514b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 491.111397ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167f199a2a7f6b46], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.167f199aed9e9c33], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167f199905cbe223], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-10 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167f1999e84cb1c0], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.101/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167f1999e9437b31], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167f199a16ad3592], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 761.893282ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167f199a2da01705], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.167f199aed9d0af8], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167f199906688eab], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167f199a9d0e26cf], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.189/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167f199abaea50d2], Reason = [Pulling], Message = [Pulling image 
"k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167f199b00f49148], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.175067128s] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167f199b2a1c6912], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.167f199b37bac0ef], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167f199906edcfdb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167f199ad7a6ac4b], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.104/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167f199afcd50ce2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167f199b2b043077], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 774.833686ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167f199b46bab906], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.167f199b610721f6], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167f19990793e5d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167f199b155d59a1], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.194/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167f199b161708c4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167f199b8528e211], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.863428336s] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167f199b8bdebe2c], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.167f199b91f7f243], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167f199908237c5d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167f199b1596ac3c], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.195/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167f199b162a2f16], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167f199ba195f25c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.339087443s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167f199ba89a6cd6], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.167f199baedde9af], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167f199908b4f076], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-15 to node1] STEP: Considering 
event: Type = [Normal], Name = [overcommit-15.167f199b3fad950d], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.105/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167f199b40ed46c6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167f199b81f88c7b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.091250721s] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167f199b8940024f], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.167f199b8f366f93], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167f199909416a7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167f199b3fc359ce], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.108/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167f199b40ecd48d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167f199b66eae8a2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 637.396695ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167f199b74a4379f], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.167f199b7a719627], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167f199909db8cf3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-17 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167f199b146b4358], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.193/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167f199b15325f79], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167f199b6989184e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.414962687s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167f199b706ad43b], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.167f199b76bcecf1], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167f19990a7509b6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-18 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167f199a7bbbf3cf], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.188/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167f199a96183ac1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167f199ac36af6fb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 760.387315ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167f199adf592309], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.167f199b2c1a900e], Reason 
= [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167f19990afa9ecb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167f199b3fd79b40], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.107/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167f199b40f0ad07], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167f199ba1497726], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.616424885s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167f199ba992e69d], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.167f199bb00ede50], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167f1999013fc3c2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167f199aab70b7a2], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.192/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167f199abdda9112], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167f199b2e50cc88], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.886786366s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167f199b3a2ac1be], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.167f199b41e46c13], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167f199901d0b8ff], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167f199a06c3a0bd], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.187/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167f199a2100f4c4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167f199a67039bce], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.174570568s] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167f199a9890f2b3], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.167f199ae7340988], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167f199902630412], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167f199b3fbead53], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.103/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167f199b40fb6833], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167f199bbf78ddbb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.122139826s] STEP: 
Considering event: Type = [Normal], Name = [overcommit-4.167f199bc90933c6], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.167f199bcec431aa], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167f199902fa87cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-5 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167f199b41c9e9a8], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.109/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167f199b4283cbe3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167f199bdce98427], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.590351539s] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167f199be40b8483], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.167f199be9bca507], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167f19990386203f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167f199aac1f9c4c], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.190/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167f199ac013caa9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167f199b4a8b3ffe], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.323077744s] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167f199b50e6d404], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.167f199b56f0473c], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167f19990413dcd4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167f199a039d695b], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.186/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167f199a1fb0bcda], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167f199a488e963f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 685.616359ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167f199aa674cb2b], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.167f199ae5e0e3ed], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167f199904ad1335], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167f199a9e108018], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.191/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167f199abae5f3e0], Reason = [Pulling], Message = 
[Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167f199adf609c7c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 612.009576ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167f199b03961c4f], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.167f199b2d0b1636], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167f199905308f3e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9883/overcommit-9 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167f199afd19f373], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.106/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167f199b2afdb6ec], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167f199b4a3f6751], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 524.384969ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167f199b5d42efdc], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.167f199b694af8cc], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.167f199c8d6d3f24], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [additional-pod.167f199c8db87afe], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:35:52.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9883" for this suite. 
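(The saturation numbers above line up as follows: each node reports 405424133473 bytes of allocatable ephemeral storage, the test sizes each of the 20 filler pods at one tenth of that, the logged "pod capacity" of 40542413347, and the 21st pod then fails with Insufficient ephemeral-storage. A minimal sketch of one such filler pod using the core/v1 Go types; the exact request/limit shape is an assumption, only the numbers and names come from the log.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Per-pod ephemeral storage is one tenth of the node's allocatable amount.
	allocatable := int64(405424133473)
	perPod := allocatable / 10
	fmt.Println(perPod) // 40542413347, the "pod capacity" in the log

	// One of the 20 overcommit-* filler pods that saturate the two nodes.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-0"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "overcommit-0",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceEphemeralStorage: *resource.NewQuantity(perPod, resource.BinarySI),
					},
				},
			}},
		},
	}
	q := pod.Spec.Containers[0].Resources.Limits[corev1.ResourceEphemeralStorage]
	fmt.Println(q.String())
}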
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.389 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":8,"skipped":3929,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:35:52.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 May 15 01:35:52.115: INFO: Waiting up to 1m0s for all nodes to be ready May 15 01:36:52.167: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
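The "medium" pod above can only be placed after the scheduler preempts lower-priority pods: 9/10 of the fake resource on both nodes is already occupied, and its topology spread constraint over the dedicated kubernetes.io/e2e-pts-preemption key forbids piling replicas onto one node. A rough Go sketch of a spec combining a priority class with such a constraint; the priority class name and label selector are illustrative, not copied from the test source:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // Medium-priority pod that must spread evenly (maxSkew 1) across the nodes
    // labelled with the dedicated kubernetes.io/e2e-pts-preemption topology key;
    // with both nodes nearly full, honouring the constraint requires preempting
    // a lower-priority pod.
    spec := corev1.PodSpec{
        PriorityClassName: "medium-priority", // hypothetical class name
        Containers: []corev1.Container{{
            Name:  "pause",
            Image: "k8s.gcr.io/pause:3.2",
        }},
        TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
            MaxSkew:           1,
            TopologyKey:       "kubernetes.io/e2e-pts-preemption",
            WhenUnsatisfiable: corev1.DoNotSchedule,
            LabelSelector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"app": "pts-demo"}, // hypothetical selector
            },
        }},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}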
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:37:34.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-9569" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:102.369 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":9,"skipped":3932,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:37:34.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:37:34.476: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:37:34.484: INFO: Waiting for terminating namespaces to be deleted... 
May 15 01:37:34.486: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:37:34.496: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:37:34.496: INFO: Container nodereport ready: true, restart count 0 May 15 01:37:34.496: INFO: Container reconcile ready: true, restart count 0 May 15 01:37:34.496: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:37:34.496: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container kube-multus ready: true, restart count 1 May 15 01:37:34.496: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:37:34.496: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:37:34.496: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:37:34.496: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:37:34.496: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:37:34.496: INFO: Container collectd ready: true, restart count 0 May 15 01:37:34.496: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:37:34.496: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:37:34.496: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:37:34.496: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:37:34.496: INFO: Container node-exporter ready: true, restart count 0 May 15 01:37:34.496: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:37:34.496: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:37:34.496: INFO: Container grafana ready: true, restart count 0 May 15 01:37:34.496: INFO: Container prometheus ready: true, restart count 26 May 15 01:37:34.496: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:37:34.496: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:37:34.496: INFO: low-1 from sched-preemption-9569 started at 2021-05-15 01:37:10 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container low-1 ready: true, restart count 0 May 15 01:37:34.496: INFO: medium from sched-preemption-9569 started at 2021-05-15 01:37:29 +0000 UTC (1 container statuses recorded) May 15 01:37:34.496: INFO: Container medium ready: true, restart count 0 May 15 01:37:34.496: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:37:34.520: INFO: cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses 
recorded) May 15 01:37:34.520: INFO: Container nodereport ready: true, restart count 0 May 15 01:37:34.520: INFO: Container reconcile ready: true, restart count 0 May 15 01:37:34.520: INFO: cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:37:34.520: INFO: Container discover ready: false, restart count 0 May 15 01:37:34.520: INFO: Container init ready: false, restart count 0 May 15 01:37:34.520: INFO: Container install ready: false, restart count 0 May 15 01:37:34.520: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:37:34.520: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:37:34.521: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:37:34.521: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container kube-multus ready: true, restart count 1 May 15 01:37:34.521: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:37:34.521: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:37:34.521: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:37:34.521: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:37:34.521: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:37:34.521: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:37:34.521: INFO: Container collectd ready: true, restart count 0 May 15 01:37:34.521: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:37:34.521: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:37:34.521: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:37:34.521: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:37:34.521: INFO: Container node-exporter ready: true, restart count 0 May 15 01:37:34.521: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:37:34.521: INFO: Container tas-controller ready: true, restart count 0 May 15 01:37:34.521: INFO: Container tas-extender ready: true, restart count 0 May 15 01:37:34.521: INFO: high from sched-preemption-9569 started at 2021-05-15 01:37:06 +0000 UTC (1 container statuses recorded) May 15 01:37:34.521: INFO: Container high ready: true, restart count 0 [It] validates that 
NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.167f19b4a67b1a1d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.167f19b4a6d5685a], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:37:35.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5383" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":10,"skipped":4059,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:37:35.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:37:35.613: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:37:35.621: INFO: Waiting for terminating namespaces to be deleted... 
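Looking back at the previous spec ("NodeAffinity is respected if not matching"): the restricted-pod is rejected on all five nodes simply because it carries a node selector that no node satisfies. A minimal Go sketch of such a spec, using a hypothetical label key and value that exist on no node:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Pod whose nodeSelector matches no node; the scheduler reports
    // "0/5 nodes are available: 5 node(s) didn't match node selector."
    spec := corev1.PodSpec{
        NodeSelector: map[string]string{
            "kubernetes.io/e2e-nonexistent-label": "42", // hypothetical key/value
        },
        Containers: []corev1.Container{{
            Name:  "pause",
            Image: "k8s.gcr.io/pause:3.2",
        }},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}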
May 15 01:37:35.623: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:37:35.630: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:37:35.630: INFO: Container nodereport ready: true, restart count 0 May 15 01:37:35.630: INFO: Container reconcile ready: true, restart count 0 May 15 01:37:35.630: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:37:35.630: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:37:35.631: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container kube-multus ready: true, restart count 1 May 15 01:37:35.631: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:37:35.631: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:37:35.631: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:37:35.631: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:37:35.631: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:37:35.631: INFO: Container collectd ready: true, restart count 0 May 15 01:37:35.631: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:37:35.631: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:37:35.631: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:37:35.631: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:37:35.631: INFO: Container node-exporter ready: true, restart count 0 May 15 01:37:35.631: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:37:35.631: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:37:35.631: INFO: Container grafana ready: true, restart count 0 May 15 01:37:35.631: INFO: Container prometheus ready: true, restart count 26 May 15 01:37:35.631: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:37:35.631: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:37:35.631: INFO: low-1 from sched-preemption-9569 started at 2021-05-15 01:37:10 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container low-1 ready: true, restart count 0 May 15 01:37:35.631: INFO: medium from sched-preemption-9569 started at 2021-05-15 01:37:29 +0000 UTC (1 container statuses recorded) May 15 01:37:35.631: INFO: Container medium ready: true, restart count 0 May 15 01:37:35.631: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:37:35.640: INFO: cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses 
recorded) May 15 01:37:35.640: INFO: Container nodereport ready: true, restart count 0 May 15 01:37:35.640: INFO: Container reconcile ready: true, restart count 0 May 15 01:37:35.640: INFO: cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:37:35.640: INFO: Container discover ready: false, restart count 0 May 15 01:37:35.640: INFO: Container init ready: false, restart count 0 May 15 01:37:35.640: INFO: Container install ready: false, restart count 0 May 15 01:37:35.640: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:37:35.640: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:37:35.640: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container kube-multus ready: true, restart count 1 May 15 01:37:35.640: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:37:35.640: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:37:35.640: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:37:35.640: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:37:35.640: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:37:35.640: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:37:35.640: INFO: Container collectd ready: true, restart count 0 May 15 01:37:35.640: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:37:35.640: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:37:35.640: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:37:35.640: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:37:35.640: INFO: Container node-exporter ready: true, restart count 0 May 15 01:37:35.640: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:37:35.640: INFO: Container tas-controller ready: true, restart count 0 May 15 01:37:35.640: INFO: Container tas-extender ready: true, restart count 0 May 15 01:37:35.640: INFO: high from sched-preemption-9569 started at 2021-05-15 01:37:06 +0000 UTC (1 container statuses recorded) May 15 01:37:35.640: INFO: Container high ready: true, restart count 0 [It] validates that 
taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-f30db68e-9085-46ec-acdd-9ba29ec139e0 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b4e80e6851], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6234/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b53cbce166], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.199/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b53d63671f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b5590c3e5b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 464.036879ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b55f3d87a3], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b564ecafb0], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b5d7952eba], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [without-toleration.167f19b5d8673f4a], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-rkmpb" : object "sched-pred-6234"/"default-token-rkmpb" not registered] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167f19b5d9726b73], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167f19b5d9bf21bb], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167f19b5d9726b73], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.167f19b5d9bf21bb], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b4e80e6851], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6234/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b53cbce166], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.199/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b53d63671f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b5590c3e5b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 464.036879ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b55f3d87a3], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b564ecafb0], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.167f19b5d7952eba], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [without-toleration.167f19b5d8673f4a], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-rkmpb" : object "sched-pred-6234"/"default-token-rkmpb" not registered] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.167f19b6568d12c3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6234/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-f30db68e-9085-46ec-acdd-9ba29ec139e0 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-f30db68e-9085-46ec-acdd-9ba29ec139e0 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:37:42.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6234" for this suite. 
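The still-no-tolerations pod above stays unschedulable because it lacks a toleration for the random NoSchedule taint placed on node2, while its node selector excludes the other four nodes; it is only assigned once the taint is removed. A brief Go sketch of the toleration that would have matched that taint (the spec under test deliberately omits it); the key is the per-run random value quoted in the events above:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Toleration matching the randomly generated NoSchedule taint the test put
    // on node2; without it the pod is rejected until the taint is removed.
    spec := corev1.PodSpec{
        Tolerations: []corev1.Toleration{{
            Key:      "kubernetes.io/e2e-taint-key-6e26d77d-5b44-475c-a595-7aadf818af79",
            Operator: corev1.TolerationOpEqual,
            Value:    "testing-taint-value",
            Effect:   corev1.TaintEffectNoSchedule,
        }},
        Containers: []corev1.Container{{
            Name:  "pause",
            Image: "k8s.gcr.io/pause:3.2",
        }},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}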
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.159 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":11,"skipped":5324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 15 01:37:42.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 15 01:37:42.774: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 15 01:37:42.782: INFO: Waiting for terminating namespaces to be deleted... 
May 15 01:37:42.784: INFO: Logging pods the apiserver thinks is on node node1 before test May 15 01:37:42.799: INFO: cmk-4s6dm from kube-system started at 2021-05-15 00:18:54 +0000 UTC (2 container statuses recorded) May 15 01:37:42.799: INFO: Container nodereport ready: true, restart count 0 May 15 01:37:42.799: INFO: Container reconcile ready: true, restart count 0 May 15 01:37:42.799: INFO: kube-flannel-hj8sj from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:37:42.799: INFO: Container kube-flannel ready: true, restart count 1 May 15 01:37:42.799: INFO: kube-multus-ds-amd64-jhf4c from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:37:42.799: INFO: Container kube-multus ready: true, restart count 1 May 15 01:37:42.799: INFO: kube-proxy-l7697 from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:37:42.799: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:37:42.799: INFO: nginx-proxy-node1 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:37:42.799: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:37:42.799: INFO: node-feature-discovery-worker-bw8zg from kube-system started at 2021-05-15 00:18:56 +0000 UTC (1 container statuses recorded) May 15 01:37:42.799: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:37:42.799: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4vxcc from kube-system started at 2021-05-15 00:19:00 +0000 UTC (1 container statuses recorded) May 15 01:37:42.799: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:37:42.799: INFO: collectd-mrzps from monitoring started at 2021-05-15 00:19:22 +0000 UTC (3 container statuses recorded) May 15 01:37:42.799: INFO: Container collectd ready: true, restart count 0 May 15 01:37:42.799: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:37:42.799: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:37:42.799: INFO: node-exporter-flvqz from monitoring started at 2021-05-15 00:18:55 +0000 UTC (2 container statuses recorded) May 15 01:37:42.799: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:37:42.799: INFO: Container node-exporter ready: true, restart count 0 May 15 01:37:42.799: INFO: prometheus-k8s-0 from monitoring started at 2021-05-15 00:19:01 +0000 UTC (5 container statuses recorded) May 15 01:37:42.799: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 15 01:37:42.799: INFO: Container grafana ready: true, restart count 0 May 15 01:37:42.799: INFO: Container prometheus ready: true, restart count 26 May 15 01:37:42.800: INFO: Container prometheus-config-reloader ready: true, restart count 0 May 15 01:37:42.800: INFO: Container rules-configmap-reloader ready: true, restart count 0 May 15 01:37:42.800: INFO: low-1 from sched-preemption-9569 started at 2021-05-15 01:37:10 +0000 UTC (1 container statuses recorded) May 15 01:37:42.800: INFO: Container low-1 ready: false, restart count 0 May 15 01:37:42.800: INFO: medium from sched-preemption-9569 started at 2021-05-15 01:37:29 +0000 UTC (1 container statuses recorded) May 15 01:37:42.800: INFO: Container medium ready: false, restart count 0 May 15 01:37:42.800: INFO: Logging pods the apiserver thinks is on node node2 before test May 15 01:37:42.810: INFO: cmk-d2qwf from kube-system started at 2021-05-14 20:09:04 +0000 UTC (2 container statuses 
recorded) May 15 01:37:42.810: INFO: Container nodereport ready: true, restart count 0 May 15 01:37:42.810: INFO: Container reconcile ready: true, restart count 0 May 15 01:37:42.810: INFO: cmk-init-discover-node2-j75ff from kube-system started at 2021-05-14 20:08:41 +0000 UTC (3 container statuses recorded) May 15 01:37:42.810: INFO: Container discover ready: false, restart count 0 May 15 01:37:42.810: INFO: Container init ready: false, restart count 0 May 15 01:37:42.810: INFO: Container install ready: false, restart count 0 May 15 01:37:42.810: INFO: cmk-webhook-6c9d5f8578-pjgxh from kube-system started at 2021-05-14 20:09:04 +0000 UTC (1 container statuses recorded) May 15 01:37:42.810: INFO: Container cmk-webhook ready: true, restart count 0 May 15 01:37:42.810: INFO: kube-flannel-rqcwp from kube-system started at 2021-05-14 19:58:58 +0000 UTC (1 container statuses recorded) May 15 01:37:42.810: INFO: Container kube-flannel ready: true, restart count 4 May 15 01:37:42.811: INFO: kube-multus-ds-amd64-n7cb2 from kube-system started at 2021-05-14 19:59:07 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container kube-multus ready: true, restart count 1 May 15 01:37:42.811: INFO: kube-proxy-t524z from kube-system started at 2021-05-14 19:58:24 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container kube-proxy ready: true, restart count 2 May 15 01:37:42.811: INFO: kubernetes-dashboard-86c6f9df5b-ndntg from kube-system started at 2021-05-14 19:59:31 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 15 01:37:42.811: INFO: nginx-proxy-node2 from kube-system started at 2021-05-14 20:05:10 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container nginx-proxy ready: true, restart count 2 May 15 01:37:42.811: INFO: node-feature-discovery-worker-76m6b from kube-system started at 2021-05-14 20:05:42 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container nfd-worker ready: true, restart count 0 May 15 01:37:42.811: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-2c2pw from kube-system started at 2021-05-14 20:06:38 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container kube-sriovdp ready: true, restart count 0 May 15 01:37:42.811: INFO: collectd-xzrgs from monitoring started at 2021-05-14 20:15:36 +0000 UTC (3 container statuses recorded) May 15 01:37:42.811: INFO: Container collectd ready: true, restart count 0 May 15 01:37:42.811: INFO: Container collectd-exporter ready: true, restart count 0 May 15 01:37:42.811: INFO: Container rbac-proxy ready: true, restart count 0 May 15 01:37:42.811: INFO: node-exporter-rnd5f from monitoring started at 2021-05-14 20:09:56 +0000 UTC (2 container statuses recorded) May 15 01:37:42.811: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 15 01:37:42.811: INFO: Container node-exporter ready: true, restart count 0 May 15 01:37:42.811: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-l5hlq from monitoring started at 2021-05-14 20:12:48 +0000 UTC (2 container statuses recorded) May 15 01:37:42.811: INFO: Container tas-controller ready: true, restart count 0 May 15 01:37:42.811: INFO: Container tas-extender ready: true, restart count 0 May 15 01:37:42.811: INFO: still-no-tolerations from sched-pred-6234 started at 2021-05-15 01:37:41 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container still-no-tolerations ready: false, restart count 0 
May 15 01:37:42.811: INFO: high from sched-preemption-9569 started at 2021-05-15 01:37:06 +0000 UTC (1 container statuses recorded) May 15 01:37:42.811: INFO: Container high ready: false, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3a3d453c-af8a-4085-863d-8044dc7de39e 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-3a3d453c-af8a-4085-863d-8044dc7de39e off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3a3d453c-af8a-4085-863d-8044dc7de39e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 15 01:37:52.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3969" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.138 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":12,"skipped":5465,"failed":0}
SSSSSSS
May 15 01:37:52.892: INFO: Running AfterSuite actions on all nodes
May 15 01:37:52.892: INFO: Running AfterSuite actions on node 1
May 15 01:37:52.892: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0}
Ran 12 of 5484 Specs in 538.508 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped
PASS
Ginkgo ran 1 suite in 8m59.693883728s
Test Suite Passed
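For reference on the final spec above ("required NodeAffinity setting is respected if matching"): the relaunched pod carries a hard node affinity term requiring the random label kubernetes.io/e2e-3a3d453c-af8a-4085-863d-8044dc7de39e=42 that the test applied to node2. A rough Go sketch of such a term; the container name and image mirror the pause image used throughout this run, and the exact shape of the test's internal spec is not shown in the log, so treat this as an illustration:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Hard (requiredDuringScheduling) node affinity on the label the test
    // applied to node2, so the pod can only be scheduled onto that node.
    spec := corev1.PodSpec{
        Affinity: &corev1.Affinity{
            NodeAffinity: &corev1.NodeAffinity{
                RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
                    NodeSelectorTerms: []corev1.NodeSelectorTerm{{
                        MatchExpressions: []corev1.NodeSelectorRequirement{{
                            Key:      "kubernetes.io/e2e-3a3d453c-af8a-4085-863d-8044dc7de39e",
                            Operator: corev1.NodeSelectorOpIn,
                            Values:   []string{"42"},
                        }},
                    }},
                },
            },
        },
        Containers: []corev1.Container{{
            Name:  "pause",
            Image: "k8s.gcr.io/pause:3.2",
        }},
    }
    out, _ := json.MarshalIndent(spec, "", "  ")
    fmt.Println(string(out))
}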