I0513 23:52:09.353548 23 e2e.go:129] Starting e2e run "3e053959-f0dc-46c7-a9b5-c92ff8891a06" on Ginkgo node 1 {"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1652485928 - Will randomize all specs Will run 13 of 5773 specs May 13 23:52:09.368: INFO: >>> kubeConfig: /root/.kube/config May 13 23:52:09.373: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 13 23:52:09.400: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 13 23:52:09.456: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting May 13 23:52:09.456: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting May 13 23:52:09.456: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 13 23:52:09.456: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. May 13 23:52:09.456: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 13 23:52:09.472: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) May 13 23:52:09.473: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) May 13 23:52:09.473: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) May 13 23:52:09.473: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) May 13 23:52:09.473: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) May 13 23:52:09.473: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) May 13 23:52:09.473: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) May 13 23:52:09.473: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 13 23:52:09.473: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) May 13 23:52:09.473: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) May 13 23:52:09.473: INFO: e2e test version: v1.21.9 May 13 23:52:09.474: INFO: kube-apiserver version: v1.21.1 May 13 23:52:09.474: INFO: >>> kubeConfig: /root/.kube/config May 13 23:52:09.485: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] 
SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:52:09.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority W0513 23:52:09.522200 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 23:52:09.522: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 23:52:09.525: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 13 23:52:09.527: INFO: Waiting up to 1m0s for all nodes to be ready May 13 23:53:09.578: INFO: Waiting for terminating namespaces to be deleted... May 13 23:53:09.581: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 13 23:53:09.599: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting May 13 23:53:09.600: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting May 13 23:53:09.600: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 13 23:53:09.600: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
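------------------------------
The "running and ready" wait logged above can be approximated outside the e2e framework with a short client-go sketch; this is not the framework's own readiness helper, and the kubeconfig path is simply the one shown in the log.

// Minimal sketch, assuming client-go is available and the kubeconfig from the log is reachable.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	runningAndReady := 0
	for _, p := range pods.Items {
		// Succeeded pods (e.g. the cmk-init-discover-* pods above) are skipped, as in the log.
		if p.Status.Phase == corev1.PodSucceeded {
			continue
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue && p.Status.Phase == corev1.PodRunning {
				runningAndReady++
				break
			}
		}
	}
	fmt.Printf("%d / %d pods in 'kube-system' are running and ready\n", runningAndReady, len(pods.Items))
}
------------------------------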
May 13 23:53:09.617: INFO: ComputeCPUMemFraction for node: node1 May 13 23:53:09.617: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:53:09.617: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:53:09.617: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:53:09.617: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:53:09.617: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:53:09.617: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:53:09.617: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:53:09.617: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:53:09.617: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 13 23:53:09.617: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 13 23:53:09.617: INFO: ComputeCPUMemFraction for node: node2 May 13 23:53:09.617: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:53:09.617: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:53:09.617: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:53:09.617: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:53:09.617: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:53:09.617: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:53:09.617: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:53:09.618: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:53:09.618: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:53:09.618: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:53:09.618: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:53:09.618: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 13 23:53:09.618: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 May 13 23:53:09.635: INFO: ComputeCPUMemFraction for node: node1 May 13 23:53:09.635: 
INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:53:09.635: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:53:09.635: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:53:09.635: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:53:09.635: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:53:09.635: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:53:09.635: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:53:09.635: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:53:09.635: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 13 23:53:09.635: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 13 23:53:09.635: INFO: ComputeCPUMemFraction for node: node2 May 13 23:53:09.635: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:53:09.635: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:53:09.635: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:53:09.635: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:53:09.635: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:53:09.635: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:53:09.635: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:53:09.635: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:53:09.635: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:53:09.635: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 13 23:53:09.635: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 May 13 23:53:09.649: INFO: Waiting for running... May 13 23:53:09.653: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
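------------------------------
The fractions printed above are plain ratios of requested to allocatable resources, and the "balanced" pad pod created next is sized so each node lands at roughly 50% utilisation. A simplified sketch of that arithmetic, replayed with node1's numbers (it is not the framework's exact balancing code, which pads memory slightly differently, hence the 0.50007 memFraction below rather than exactly 0.5):

package main

import "fmt"

// fraction is requested/allocatable, as logged per node above.
func fraction(requested, allocatable int64) float64 {
	return float64(requested) / float64(allocatable)
}

func main() {
	const (
		cpuRequestedMilli = 937                 // sum of pod CPU requests on node1 (millicores)
		cpuAllocatableMil = 77000               // node1 allocatable CPU (millicores)
		memRequested      = int64(1774807040)   // bytes
		memAllocatable    = int64(178884608000) // bytes
		targetRatio       = 0.5                 // the test pads every node to ~50%
	)

	fmt.Println("cpuFraction:", fraction(cpuRequestedMilli, cpuAllocatableMil)) // 0.01216883116883117, as logged
	fmt.Println("memFraction:", fraction(memRequested, memAllocatable))         // 0.009921519016325877, as logged

	// CPU request of the pad pod: targetRatio*allocatable - alreadyRequested.
	padCPU := int64(targetRatio*float64(cpuAllocatableMil)) - cpuRequestedMilli
	fmt.Println("pad pod CPU request (m):", padCPU) // 37563, matching the pod logged after balancing
}
------------------------------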
May 13 23:53:14.721: INFO: ComputeCPUMemFraction for node: node1 May 13 23:53:14.721: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:53:14.721: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:53:14.721: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:53:14.721: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:53:14.721: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:53:14.721: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:53:14.721: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:53:14.721: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:53:14.721: INFO: Pod for on the node: 410a1ecb-e4df-42a9-bdc1-5f4bdb2f9c85-0, Cpu: 37563, Mem: 87680079872 May 13 23:53:14.721: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 13 23:53:14.721: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 13 23:53:14.721: INFO: ComputeCPUMemFraction for node: node2 May 13 23:53:14.721: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:53:14.721: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:53:14.721: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:53:14.721: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:53:14.721: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:53:14.721: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:53:14.721: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:53:14.722: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:53:14.722: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:53:14.722: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:53:14.722: INFO: Pod for on the node: a761db5c-8e7c-4791-8941-7d69995f8e3b-0, Cpu: 37813, Mem: 88635369472 May 13 23:53:14.722: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 13 23:53:14.722: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3820 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3820, will wait for the garbage collector to delete the pods May 13 23:53:20.903: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 3.997893ms May 13 23:53:21.003: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.737515ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:53:32.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3820" for this suite. 
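------------------------------
The avoidPod annotation applied to node1 above is the scheduler.alpha.kubernetes.io/preferAvoidPods node annotation: the scheduler's node-preference scoring reads it and steers pods owned by the listed controller away from that node, which is why the single scaled-up replica is verified not to land on node1. A sketch of the payload shape (the controller name, UID, reason and message are placeholders, not values captured from this run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "00000000-0000-0000-0000-000000000000", // placeholder UID
					Controller: &controller,
				},
			},
			Reason:  "some reason",
			Message: "some message",
		}},
	}
	val, _ := json.Marshal(avoid)
	// The test sets this JSON as the value of node1's preferAvoidPods annotation.
	fmt.Printf("scheduler.alpha.kubernetes.io/preferAvoidPods: %s\n", val)
}
------------------------------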
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:83.135 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":1,"skipped":859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:53:32.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 23:53:32.675: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 23:53:32.684: INFO: Waiting for terminating namespaces to be deleted... 
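------------------------------
The PodTopologySpread spec that starts above exercises a pod.spec.topologySpreadConstraints entry with maxSkew=1 over the test-specific topology key kubernetes.io/e2e-pts-filter, so its 4 replicas must end up 2 and 2 across the two labelled nodes. A sketch of such a constraint (the pod label used in the selector is a placeholder, not taken from this run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	tsc := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"foo": "bar"}, // placeholder selector
		},
	}
	out, _ := json.MarshalIndent(tsc, "", "  ")
	fmt.Println(string(out)) // one entry of pod.spec.topologySpreadConstraints
}
------------------------------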
May 13 23:53:32.687: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 23:53:32.697: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 23:53:32.697: INFO: Container discover ready: false, restart count 0 May 13 23:53:32.697: INFO: Container init ready: false, restart count 0 May 13 23:53:32.697: INFO: Container install ready: false, restart count 0 May 13 23:53:32.697: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:32.697: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:32.697: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:32.697: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container cmk-webhook ready: true, restart count 0 May 13 23:53:32.697: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:32.697: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:32.697: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:32.697: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 23:53:32.697: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 23:53:32.697: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:32.697: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:32.697: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:32.697: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:32.697: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:32.697: INFO: Container collectd ready: true, restart count 0 May 13 23:53:32.697: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:32.697: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:32.697: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:32.697: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:32.697: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:32.697: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 23:53:32.697: INFO: Container config-reloader ready: true, restart count 0 May 13 23:53:32.697: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 23:53:32.697: INFO: Container grafana ready: true, restart count 0 May 13 23:53:32.697: INFO: Container prometheus ready: true, restart count 1 May 13 23:53:32.697: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 23:53:32.704: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 23:53:32.704: INFO: Container discover ready: false, restart count 0 May 13 23:53:32.704: INFO: Container init ready: false, restart count 0 May 13 23:53:32.704: INFO: Container install ready: false, restart count 0 May 13 23:53:32.704: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:32.704: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:32.704: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:32.704: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:32.704: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:32.704: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:32.704: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:32.704: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:32.704: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:32.704: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:32.704: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:32.704: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:32.704: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:32.704: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:32.704: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:32.704: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:32.704: INFO: Container collectd ready: true, restart count 0 May 13 23:53:32.704: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:32.705: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:32.705: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:32.705: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:32.705: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:32.705: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 23:53:32.705: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:32.705: INFO: Container prometheus-operator ready: true, restart count 0 May 13 23:53:32.705: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 
+0000 UTC (1 container statuses recorded) May 13 23:53:32.705: INFO: Container tas-extender ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:53:46.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4632" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.179 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":2,"skipped":1752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:53:46.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 23:53:46.855: INFO: Waiting up to 1m0s for all (but 0) 
nodes to be ready May 13 23:53:46.864: INFO: Waiting for terminating namespaces to be deleted... May 13 23:53:46.866: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 23:53:46.874: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 23:53:46.874: INFO: Container discover ready: false, restart count 0 May 13 23:53:46.874: INFO: Container init ready: false, restart count 0 May 13 23:53:46.874: INFO: Container install ready: false, restart count 0 May 13 23:53:46.874: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:46.874: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:46.874: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:46.874: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container cmk-webhook ready: true, restart count 0 May 13 23:53:46.874: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:46.874: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:46.874: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:46.874: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 23:53:46.874: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 23:53:46.874: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:46.874: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:46.874: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:46.874: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:46.874: INFO: Container collectd ready: true, restart count 0 May 13 23:53:46.874: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:46.874: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:46.874: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:46.874: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:46.874: INFO: Container node-exporter ready: true, restart 
count 0 May 13 23:53:46.874: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 +0000 UTC (4 container statuses recorded) May 13 23:53:46.874: INFO: Container config-reloader ready: true, restart count 0 May 13 23:53:46.874: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 23:53:46.874: INFO: Container grafana ready: true, restart count 0 May 13 23:53:46.874: INFO: Container prometheus ready: true, restart count 1 May 13 23:53:46.874: INFO: rs-e2e-pts-filter-fkfx5 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container e2e-pts-filter ready: true, restart count 0 May 13 23:53:46.874: INFO: rs-e2e-pts-filter-pn7z2 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:46.874: INFO: Container e2e-pts-filter ready: true, restart count 0 May 13 23:53:46.874: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 23:53:46.883: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 23:53:46.883: INFO: Container discover ready: false, restart count 0 May 13 23:53:46.883: INFO: Container init ready: false, restart count 0 May 13 23:53:46.883: INFO: Container install ready: false, restart count 0 May 13 23:53:46.883: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:46.883: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:46.883: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:46.883: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:46.883: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:46.883: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:46.883: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:46.883: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:46.883: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:46.883: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:46.883: INFO: Container collectd ready: true, restart count 0 May 13 23:53:46.883: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:46.883: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:46.883: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:46.883: INFO: Container kube-rbac-proxy 
ready: true, restart count 0 May 13 23:53:46.883: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:46.883: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 23:53:46.883: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:46.883: INFO: Container prometheus-operator ready: true, restart count 0 May 13 23:53:46.883: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container tas-extender ready: true, restart count 0 May 13 23:53:46.883: INFO: rs-e2e-pts-filter-2fcmb from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container e2e-pts-filter ready: true, restart count 0 May 13 23:53:46.883: INFO: rs-e2e-pts-filter-bjjw4 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:46.883: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-89b34c7a-650d-45f0-b1ca-50362b723c8a 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-89b34c7a-650d-45f0-b1ca-50362b723c8a off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-89b34c7a-650d-45f0-b1ca-50362b723c8a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:53:54.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9189" for this suite. 
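------------------------------
The spec above labels node1 with the random key kubernetes.io/e2e-89b34c7a-650d-45f0-b1ca-50362b723c8a=42 and then relaunches the pod with a required node affinity on that label, so it can only schedule onto node1. A sketch of the affinity term involved (key and value copied from the log; everything else is minimal):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	aff := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-89b34c7a-650d-45f0-b1ca-50362b723c8a",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(aff, "", "  ")
	fmt.Println(string(out)) // goes under pod.spec.affinity
}
------------------------------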
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.136 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":3,"skipped":1810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:53:54.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 23:53:54.995: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 23:53:55.003: INFO: Waiting for terminating namespaces to be deleted... 
May 13 23:53:55.005: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 23:53:55.013: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 23:53:55.013: INFO: Container discover ready: false, restart count 0 May 13 23:53:55.013: INFO: Container init ready: false, restart count 0 May 13 23:53:55.013: INFO: Container install ready: false, restart count 0 May 13 23:53:55.013: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:55.013: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:55.013: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:55.013: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container cmk-webhook ready: true, restart count 0 May 13 23:53:55.013: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:55.013: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:55.013: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:55.013: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 23:53:55.013: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 23:53:55.013: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:55.013: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:55.013: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:55.013: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:55.013: INFO: Container collectd ready: true, restart count 0 May 13 23:53:55.013: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:55.013: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:55.013: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:55.013: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:55.013: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:55.013: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 23:53:55.013: INFO: Container config-reloader ready: true, restart count 0 May 13 23:53:55.013: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 23:53:55.013: INFO: Container grafana ready: true, restart count 0 May 13 23:53:55.013: INFO: Container prometheus ready: true, restart count 1 May 13 23:53:55.013: INFO: rs-e2e-pts-filter-fkfx5 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container e2e-pts-filter ready: false, restart count 0 May 13 23:53:55.013: INFO: rs-e2e-pts-filter-pn7z2 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container e2e-pts-filter ready: false, restart count 0 May 13 23:53:55.013: INFO: with-labels from sched-pred-9189 started at 2022-05-13 23:53:50 +0000 UTC (1 container statuses recorded) May 13 23:53:55.013: INFO: Container with-labels ready: true, restart count 0 May 13 23:53:55.013: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 23:53:55.023: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 23:53:55.023: INFO: Container discover ready: false, restart count 0 May 13 23:53:55.023: INFO: Container init ready: false, restart count 0 May 13 23:53:55.023: INFO: Container install ready: false, restart count 0 May 13 23:53:55.023: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:55.023: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:55.023: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:55.023: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:55.023: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:55.023: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:55.023: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:55.023: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:55.023: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:55.023: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:55.023: INFO: Container collectd ready: true, restart count 0 May 13 23:53:55.023: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:55.023: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:55.023: INFO: node-exporter-n5snd from monitoring started at 
2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:55.023: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:55.023: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:55.023: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 23:53:55.023: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:55.023: INFO: Container prometheus-operator ready: true, restart count 0 May 13 23:53:55.023: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container tas-extender ready: true, restart count 0 May 13 23:53:55.023: INFO: rs-e2e-pts-filter-2fcmb from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container e2e-pts-filter ready: false, restart count 0 May 13 23:53:55.023: INFO: rs-e2e-pts-filter-bjjw4 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:55.023: INFO: Container e2e-pts-filter ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16eecf4b2bc413a1], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:53:56.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-44" for this suite. 
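------------------------------
The non-matching case above only needs a nodeSelector that no node satisfies: restricted-pod stays Pending, and the FailedScheduling event quoted in the log ("2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }") is the expected outcome. A sketch of such a pod (the label key/value and the image are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			// No node in the cluster carries this label, so scheduling must fail.
			NodeSelector: map[string]string{"label": "nonempty"},
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------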
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":4,"skipped":2251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:53:56.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 13 23:53:56.099: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 13 23:53:56.107: INFO: Waiting for terminating namespaces to be deleted... 
May 13 23:53:56.109: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 23:53:56.120: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 23:53:56.120: INFO: Container discover ready: false, restart count 0 May 13 23:53:56.120: INFO: Container init ready: false, restart count 0 May 13 23:53:56.120: INFO: Container install ready: false, restart count 0 May 13 23:53:56.120: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:56.120: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:56.120: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:56.120: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container cmk-webhook ready: true, restart count 0 May 13 23:53:56.120: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:56.120: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:56.120: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:56.120: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 23:53:56.120: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 23:53:56.120: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:56.120: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:56.120: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:56.120: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:56.120: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:56.120: INFO: Container collectd ready: true, restart count 0 May 13 23:53:56.120: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:56.120: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:56.120: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:56.120: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:56.120: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:56.120: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 23:53:56.120: INFO: Container config-reloader ready: true, restart count 0 May 13 23:53:56.120: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 23:53:56.120: INFO: Container grafana ready: true, restart count 0 May 13 23:53:56.120: INFO: Container prometheus ready: true, restart count 1 May 13 23:53:56.120: INFO: rs-e2e-pts-filter-fkfx5 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:56.121: INFO: Container e2e-pts-filter ready: false, restart count 0 May 13 23:53:56.121: INFO: rs-e2e-pts-filter-pn7z2 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:56.121: INFO: Container e2e-pts-filter ready: false, restart count 0 May 13 23:53:56.121: INFO: with-labels from sched-pred-9189 started at 2022-05-13 23:53:50 +0000 UTC (1 container statuses recorded) May 13 23:53:56.121: INFO: Container with-labels ready: true, restart count 0 May 13 23:53:56.121: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 23:53:56.128: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 23:53:56.128: INFO: Container discover ready: false, restart count 0 May 13 23:53:56.128: INFO: Container init ready: false, restart count 0 May 13 23:53:56.128: INFO: Container install ready: false, restart count 0 May 13 23:53:56.128: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:53:56.128: INFO: Container nodereport ready: true, restart count 0 May 13 23:53:56.128: INFO: Container reconcile ready: true, restart count 0 May 13 23:53:56.128: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:53:56.128: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container kube-multus ready: true, restart count 1 May 13 23:53:56.128: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:53:56.128: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:53:56.128: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:53:56.128: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:53:56.128: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:53:56.128: INFO: Container collectd ready: true, restart count 0 May 13 23:53:56.128: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:53:56.128: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:53:56.128: INFO: node-exporter-n5snd from monitoring started at 
2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:53:56.128: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:56.128: INFO: Container node-exporter ready: true, restart count 0 May 13 23:53:56.128: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 23:53:56.128: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:53:56.128: INFO: Container prometheus-operator ready: true, restart count 0 May 13 23:53:56.128: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container tas-extender ready: true, restart count 0 May 13 23:53:56.128: INFO: rs-e2e-pts-filter-2fcmb from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container e2e-pts-filter ready: false, restart count 0 May 13 23:53:56.128: INFO: rs-e2e-pts-filter-bjjw4 from sched-pred-4632 started at 2022-05-13 23:53:40 +0000 UTC (1 container statuses recorded) May 13 23:53:56.128: INFO: Container e2e-pts-filter ready: false, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1c543325-f8f0-42bf-a591-ffbe901e51b4=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-6757a547-0432-46b8-9e7d-c1eb1d9d7d73 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-6757a547-0432-46b8-9e7d-c1eb1d9d7d73 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-6757a547-0432-46b8-9e7d-c1eb1d9d7d73 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1c543325-f8f0-42bf-a591-ffbe901e51b4=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:54:04.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4408" for this suite. 
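------------------------------
For reference, the taint this spec applies (kubernetes.io/e2e-taint-key-1c543325-f8f0-42bf-a591-ffbe901e51b4=testing-taint-value:NoSchedule) and a toleration that matches it can be written with the k8s.io/api/core/v1 types as below. This is a minimal sketch, not the suite's own code; the taint key/value/effect, the pod name with-tolerations and the pause image come from the log, while the rest of the pod spec is an illustrative placeholder.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The NoSchedule taint the test put on the chosen node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-1c543325-f8f0-42bf-a591-ffbe901e51b4",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The relaunched pod carries a toleration whose key, value and effect all
	// match the taint, so the taint no longer keeps it off the node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			Tolerations: []corev1.Toleration{{
				Key:      taint.Key,
				Operator: corev1.TolerationOpEqual,
				Value:    taint.Value,
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{{
				Name:  "with-tolerations",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}

	fmt.Printf("taint:      %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration: %+v\n", pod.Spec.Tolerations[0])
}
------------------------------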
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.173 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":5,"skipped":2891,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:54:04.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 13 23:54:04.275: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 13 23:54:04.284: INFO: Waiting for terminating namespaces to be deleted...
May 13 23:54:04.286: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 23:54:04.297: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 23:54:04.297: INFO: Container discover ready: false, restart count 0 May 13 23:54:04.297: INFO: Container init ready: false, restart count 0 May 13 23:54:04.297: INFO: Container install ready: false, restart count 0 May 13 23:54:04.297: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:54:04.297: INFO: Container nodereport ready: true, restart count 0 May 13 23:54:04.297: INFO: Container reconcile ready: true, restart count 0 May 13 23:54:04.297: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container cmk-webhook ready: true, restart count 0 May 13 23:54:04.297: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:54:04.297: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container kube-multus ready: true, restart count 1 May 13 23:54:04.297: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:54:04.297: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 23:54:04.297: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 23:54:04.297: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:54:04.297: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:54:04.297: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:54:04.297: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:54:04.297: INFO: Container collectd ready: true, restart count 0 May 13 23:54:04.297: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:54:04.297: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:54:04.297: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:54:04.297: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:54:04.297: INFO: Container node-exporter ready: true, restart count 0 May 13 23:54:04.297: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 23:54:04.297: INFO: Container config-reloader ready: true, restart count 0 May 13 23:54:04.297: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 23:54:04.297: INFO: Container grafana ready: true, restart count 0 May 13 23:54:04.297: INFO: Container prometheus ready: true, restart count 1 May 13 23:54:04.297: INFO: with-labels from sched-pred-9189 started at 2022-05-13 23:53:50 +0000 UTC (1 container statuses recorded) May 13 23:54:04.297: INFO: Container with-labels ready: false, restart count 0 May 13 23:54:04.297: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 23:54:04.304: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 23:54:04.304: INFO: Container discover ready: false, restart count 0 May 13 23:54:04.304: INFO: Container init ready: false, restart count 0 May 13 23:54:04.304: INFO: Container install ready: false, restart count 0 May 13 23:54:04.304: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:54:04.304: INFO: Container nodereport ready: true, restart count 0 May 13 23:54:04.304: INFO: Container reconcile ready: true, restart count 0 May 13 23:54:04.304: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:54:04.304: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container kube-multus ready: true, restart count 1 May 13 23:54:04.304: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:54:04.304: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:54:04.304: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:54:04.304: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:54:04.304: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:54:04.304: INFO: Container collectd ready: true, restart count 0 May 13 23:54:04.304: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:54:04.304: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:54:04.304: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:54:04.304: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:54:04.304: INFO: Container node-exporter ready: true, restart count 0 May 13 23:54:04.304: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 23:54:04.304: INFO: Container kube-rbac-proxy ready: true, restart count 0 
May 13 23:54:04.304: INFO: Container prometheus-operator ready: true, restart count 0 May 13 23:54:04.304: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container tas-extender ready: true, restart count 0 May 13 23:54:04.304: INFO: with-tolerations from sched-pred-4408 started at 2022-05-13 23:54:00 +0000 UTC (1 container statuses recorded) May 13 23:54:04.304: INFO: Container with-tolerations ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 May 13 23:54:04.349: INFO: Pod cmk-qhbd6 requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod cmk-tfblh requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod cmk-webhook-6c9d5f8578-59hj6 requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod kube-flannel-lv9xf requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod kube-flannel-xfj7m requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod kube-multus-ds-amd64-dtt2x requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod kube-multus-ds-amd64-l7nx2 requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod kube-proxy-rs2zg requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod kube-proxy-wkzbm requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod kubernetes-dashboard-785dcbb76d-tcgth requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod kubernetes-metrics-scraper-5558854cb-2bw7v requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod node-feature-discovery-worker-cxxqf requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod node-feature-discovery-worker-l459c requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod collectd-9gqhr requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod collectd-p26j2 requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod node-exporter-42x8d requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod node-exporter-n5snd requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Pod prometheus-operator-585ccfb458-vrwnp requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 requesting local ephemeral resource =0 on Node node2 May 13 23:54:04.349: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node node2 May 13 
23:54:04.349: INFO: Pod with-labels requesting local ephemeral resource =0 on Node node1 May 13 23:54:04.349: INFO: Using pod capacity: 40608090249 May 13 23:54:04.349: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 May 13 23:54:04.349: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one May 13 23:54:04.535: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16eecf4d56f68fc1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16eecf4e6fbad38e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16eecf4e838868d7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 332.223826ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16eecf4e93e69792], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16eecf4ef9fe1d9e], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16eecf4d5728fe82], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16eecf4ee2d7a599], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16eecf4efa896b79], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 397.520623ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16eecf4f1f7971bf], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16eecf4f80d627d6], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16eecf4d5c14285b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-10 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16eecf4f02ed6523], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16eecf4f1897a94f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 363.472325ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16eecf4f25d80799], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16eecf4f46aaa5d5], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16eecf4d5c9722f9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-11 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16eecf4e827054bc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16eecf4e93038b9d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 278.072752ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16eecf4eba47d400], Reason = [Created], Message = [Created 
container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16eecf4ef2409181], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16eecf4d5d13a065], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16eecf4f44458973], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16eecf4f69709785], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 623.573252ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16eecf4f7190fa35], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16eecf4f9ce17b5c], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16eecf4d5da56775], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16eecf4f79df6ff4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16eecf4f8fb3d446], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 366.23378ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16eecf4fa59f3a87], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16eecf4fad2c82e7], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16eecf4d5e4404ba], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16eecf4f8b40268b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16eecf4fbd6ea473], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 841.901888ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16eecf4fc37a7a5f], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16eecf4fca31061f], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16eecf4d5ec2c55d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16eecf4ed2f6e23a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16eecf4ef4c49b7b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 567.119491ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16eecf4f1d3674df], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16eecf4f46b9499d], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16eecf4d5f668a4d], Reason = [Scheduled], Message = [Successfully assigned 
sched-pred-4913/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16eecf4f45100bf9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16eecf4f8fee25bc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.256062015s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16eecf4f973b2f1e], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16eecf4fa6ed6e37], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16eecf4d5fe6351a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-17 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16eecf4fa3cb279d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16eecf4fdfefd5b6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.009028992s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16eecf4fe6474de9], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16eecf4feda3eb5a], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16eecf4d605e9d99], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16eecf4f43aaf62a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16eecf4f56786f34], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 315.447405ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16eecf4f69f377b8], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16eecf4f8e06a6a6], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16eecf4d60fb0767], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16eecf4f696d2f16], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16eecf4fa2154119], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 950.530159ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16eecf4fabccf920], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16eecf4fb2dd116b], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16eecf4d57c90e99], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16eecf4e0329a62a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16eecf4e15ebe530], Reason = [Pulled], Message = [Successfully pulled 
image "k8s.gcr.io/pause:3.4.1" in 314.713442ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16eecf4e3975a214], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16eecf4e83869d4d], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16eecf4d5853ba0e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-3 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16eecf4e82b51747], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16eecf4ea3f3fa32], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 557.757094ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16eecf4ebcc7488c], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16eecf4f09e140a1], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16eecf4d58db79a6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16eecf4de77173c4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16eecf4e00e3f8ec], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 426.922206ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16eecf4e38ba67bb], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16eecf4e89f7a027], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16eecf4d597e9121], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-5 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16eecf4e896a9fd3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16eecf4eba43da3d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 819.530086ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16eecf4ef23d2b86], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16eecf4f11d9e5a1], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16eecf4d5a00b14d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16eecf4e066f3ee0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16eecf4e2c464cb8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 634.842906ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16eecf4e837dd598], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16eecf4eea822e24], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering 
event: Type = [Normal], Name = [overcommit-7.16eecf4d5a8f50d1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-7 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16eecf4f875f9a4a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16eecf4fa967e753], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 570.961707ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16eecf4fb0cef23e], Reason = [Created], Message = [Created container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16eecf4fb7158ba8], Reason = [Started], Message = [Started container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16eecf4d5b14ff48], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-8 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16eecf4f07cdad72], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16eecf4f1c4841c2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 343.570446ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16eecf4f2d26af2f], Reason = [Created], Message = [Created container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16eecf4fa3e9967b], Reason = [Started], Message = [Started container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16eecf4d5b95bf5e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4913/overcommit-9 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16eecf4f8b431807], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16eecf4fce93d0cb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.129350291s]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16eecf4fd4ed5afa], Reason = [Created], Message = [Created container overcommit-9]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16eecf4fdb76f621], Reason = [Started], Message = [Started container overcommit-9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16eecf50e35651fe], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:54:20.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4913" for this suite.
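------------------------------
The numbers behind this spec line up as follows: each node reports 406080902496 bytes of allocatable ephemeral-storage, the logged "pod capacity" of 40608090249 is one tenth of that (integer division), so ten overcommit pods per node leave only a handful of bytes free and the 21st pod (additional-pod) fails with "Insufficient ephemeral-storage". A quick sketch of that arithmetic, assuming the divisor of 10 that the logged values are consistent with (the authoritative computation lives in predicates.go):

package main

import "fmt"

func main() {
	allocatable := int64(406080902496) // per-node ephemeral-storage allocatable, from the log
	perPod := allocatable / 10         // 40608090249, the "pod capacity" the log reports

	podsPerNode := allocatable / perPod          // 10
	leftover := allocatable - podsPerNode*perPod // 6 bytes

	fmt.Println("per-pod request:", perPod)
	fmt.Println("pods per node:", podsPerNode, "-> 2 nodes x", podsPerNode, "= 20 pods to saturate the cluster")
	fmt.Println("leftover per node:", leftover, "bytes, so one more pod gets Insufficient ephemeral-storage")
}
------------------------------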
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.377 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":6,"skipped":3145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption
validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:54:20.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
May 13 23:54:20.676: INFO: Waiting up to 1m0s for all nodes to be ready
May 13 23:55:20.734: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node2.
STEP: Apply 10 fake resource to node node1.
[It] validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
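------------------------------
Before the next step ("Create 1 Medium Pod with TopologySpreadConstraints"), here is a rough sketch of what such a pod spec can look like with the k8s.io/api/core/v1 types. Only the topology key kubernetes.io/e2e-pts-preemption and the pause image come from the log; the priority class name and label selector are illustrative placeholders, and the request for the fake extended resource the pods occupy is omitted.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	medium := corev1.PodSpec{
		// Hypothetical class name: it only has to outrank the "low" pods so the
		// scheduler may preempt them to satisfy the spread constraint.
		PriorityClassName: "medium-priority",
		TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-preemption", // label applied to the 2 nodes above
			WhenUnsatisfiable: corev1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"group": "pts"}, // placeholder selector
			},
		}},
		Containers: []corev1.Container{{
			Name:  "medium",
			Image: "k8s.gcr.io/pause:3.4.1",
		}},
	}
	fmt.Println(medium.TopologySpreadConstraints[0].TopologyKey)
}
------------------------------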
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:56:03.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1997" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:102.419 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":7,"skipped":3851,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:56:03.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 13 23:56:03.088: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 13 23:56:03.097: INFO: Waiting for terminating namespaces to be deleted...
May 13 23:56:03.099: INFO: Logging pods the apiserver thinks is on node node1 before test May 13 23:56:03.107: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 13 23:56:03.107: INFO: Container discover ready: false, restart count 0 May 13 23:56:03.107: INFO: Container init ready: false, restart count 0 May 13 23:56:03.107: INFO: Container install ready: false, restart count 0 May 13 23:56:03.107: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:56:03.107: INFO: Container nodereport ready: true, restart count 0 May 13 23:56:03.107: INFO: Container reconcile ready: true, restart count 0 May 13 23:56:03.107: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container cmk-webhook ready: true, restart count 0 May 13 23:56:03.107: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:56:03.107: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container kube-multus ready: true, restart count 1 May 13 23:56:03.107: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:56:03.107: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 13 23:56:03.107: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 13 23:56:03.107: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:56:03.107: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:56:03.107: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:56:03.107: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:56:03.107: INFO: Container collectd ready: true, restart count 0 May 13 23:56:03.107: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:56:03.107: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:56:03.107: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:56:03.107: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:56:03.107: INFO: Container node-exporter ready: true, restart count 0 May 13 23:56:03.107: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 13 23:56:03.107: INFO: Container config-reloader ready: true, restart count 0 May 13 23:56:03.107: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 13 23:56:03.107: INFO: Container grafana ready: true, restart count 0 May 13 23:56:03.107: INFO: Container prometheus ready: true, restart count 1 May 13 23:56:03.107: INFO: low-1 from sched-preemption-1997 started at 2022-05-13 23:55:36 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container low-1 ready: true, restart count 0 May 13 23:56:03.107: INFO: medium from sched-preemption-1997 started at 2022-05-13 23:55:52 +0000 UTC (1 container statuses recorded) May 13 23:56:03.107: INFO: Container medium ready: true, restart count 0 May 13 23:56:03.107: INFO: Logging pods the apiserver thinks is on node node2 before test May 13 23:56:03.128: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 13 23:56:03.128: INFO: Container discover ready: false, restart count 0 May 13 23:56:03.128: INFO: Container init ready: false, restart count 0 May 13 23:56:03.128: INFO: Container install ready: false, restart count 0 May 13 23:56:03.128: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 13 23:56:03.128: INFO: Container nodereport ready: true, restart count 0 May 13 23:56:03.128: INFO: Container reconcile ready: true, restart count 0 May 13 23:56:03.128: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container kube-flannel ready: true, restart count 2 May 13 23:56:03.128: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container kube-multus ready: true, restart count 1 May 13 23:56:03.128: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container kube-proxy ready: true, restart count 2 May 13 23:56:03.128: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container nginx-proxy ready: true, restart count 2 May 13 23:56:03.128: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container nfd-worker ready: true, restart count 0 May 13 23:56:03.128: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container kube-sriovdp ready: true, restart count 0 May 13 23:56:03.128: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 13 23:56:03.128: INFO: Container collectd ready: true, restart count 0 May 13 23:56:03.128: INFO: Container collectd-exporter ready: true, restart count 0 May 13 23:56:03.128: INFO: Container rbac-proxy ready: true, restart count 0 May 13 23:56:03.128: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 13 23:56:03.128: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:56:03.128: INFO: Container node-exporter ready: true, restart count 0 May 13 23:56:03.128: INFO: 
prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 13 23:56:03.128: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 13 23:56:03.128: INFO: Container prometheus-operator ready: true, restart count 0 May 13 23:56:03.128: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container tas-extender ready: true, restart count 0 May 13 23:56:03.128: INFO: high from sched-preemption-1997 started at 2022-05-13 23:55:32 +0000 UTC (1 container statuses recorded) May 13 23:56:03.128: INFO: Container high ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-eaee4e3c-c900-4297-be77-465bad508cb3=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-0424e03e-a889-43fd-aa54-fab9066779ca testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf68fee95d19], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7896/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69581f154c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf696a024941], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 300.095479ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69716d0bc5], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69795afbd7], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69ee2c6ff6], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16eecf69efe1e686], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-eaee4e3c-c900-4297-be77-465bad508cb3: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16eecf69efe1e686], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-eaee4e3c-c900-4297-be77-465bad508cb3: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf68fee95d19], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7896/without-toleration to node1]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69581f154c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf696a024941], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 300.095479ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69716d0bc5], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69795afbd7], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16eecf69ee2c6ff6], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-eaee4e3c-c900-4297-be77-465bad508cb3=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16eecf6a2f98fb9c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7896/still-no-tolerations to node1]
STEP: removing the label kubernetes.io/e2e-label-key-0424e03e-a889-43fd-aa54-fab9066779ca off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-0424e03e-a889-43fd-aa54-fab9066779ca
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-eaee4e3c-c900-4297-be77-465bad508cb3=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:56:09.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7896" for this suite.
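------------------------------
The FailedScheduling events in this spec come down to the toleration check sketched below: a simplified re-implementation for illustration only (the in-tree logic lives with the Toleration type in k8s.io/api/core/v1). The taint is the one the test applied; the still-no-tolerations pod carries no toleration at all, so nothing matches and the scheduler reports "node(s) had taint ... that the pod didn't tolerate".

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// tolerates reports whether a single toleration covers a taint: the effect must
// match (or be empty), the key must match (empty key matches any key), and the
// operator decides whether the value is compared (Equal) or ignored (Exists).
func tolerates(tol corev1.Toleration, taint corev1.Taint) bool {
	if tol.Effect != "" && tol.Effect != taint.Effect {
		return false
	}
	if tol.Key != "" && tol.Key != taint.Key {
		return false
	}
	switch tol.Operator {
	case corev1.TolerationOpExists:
		return true
	case corev1.TolerationOpEqual, "": // empty operator defaults to Equal
		return tol.Value == taint.Value
	default:
		return false
	}
}

func main() {
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-eaee4e3c-c900-4297-be77-465bad508cb3",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}
	// A pod with no tolerations fails this check for the taint above.
	fmt.Println(tolerates(corev1.Toleration{}, taint)) // false
}
------------------------------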
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:6.182 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":8,"skipped":4193,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial]
Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:56:09.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
May 13 23:56:09.272: INFO: Waiting up to 1m0s for all nodes to be ready
May 13 23:57:09.326: INFO: Waiting for terminating namespaces to be deleted...
May 13 23:57:09.329: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 13 23:57:09.347: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting
May 13 23:57:09.347: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting
May 13 23:57:09.347: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 13 23:57:09.347: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
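------------------------------
The spec that starts here ("Pod should be scheduled to node that don't match the PodAntiAffinity terms") hinges on a required pod anti-affinity term scoped to the kubernetes.io/hostname topology. A sketch with k8s.io/api/core/v1 types; the label key/value security=s1 is only inferred from the pod name pod-with-label-security-s1 that appears later in the log, and the rest is illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required anti-affinity: do not run on any node (hostname topology domain)
	// that already hosts a pod labeled security=s1.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security", // inferred from pod-with-label-security-s1
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"s1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname", // one node = one topology domain
			}},
		},
	}
	fmt.Println(affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}
------------------------------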
May 13 23:57:09.364: INFO: ComputeCPUMemFraction for node: node1 May 13 23:57:09.364: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:57:09.364: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:57:09.364: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:57:09.364: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:57:09.364: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:57:09.364: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:57:09.364: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:57:09.364: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:57:09.364: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:57:09.364: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:57:09.364: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:57:09.364: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:57:09.364: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:57:09.364: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:57:09.364: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 13 23:57:09.364: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 13 23:57:09.364: INFO: ComputeCPUMemFraction for node: node2 May 13 23:57:09.364: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:57:09.364: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:57:09.364: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:57:09.365: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:57:09.365: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:57:09.365: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:57:09.365: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:57:09.365: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:57:09.365: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:57:09.365: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:57:09.365: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:57:09.365: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:57:09.365: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 13 23:57:09.365: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can 
launch it. STEP: Verifying the node has a label kubernetes.io/hostname May 13 23:57:13.409: INFO: ComputeCPUMemFraction for node: node1 May 13 23:57:13.409: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:57:13.409: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:57:13.409: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:57:13.409: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:57:13.409: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:57:13.409: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:57:13.409: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:57:13.409: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:57:13.409: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 13 23:57:13.409: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 13 23:57:13.409: INFO: ComputeCPUMemFraction for node: node2 May 13 23:57:13.409: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:57:13.409: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:57:13.409: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:57:13.409: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:57:13.409: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:57:13.409: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:57:13.409: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:57:13.409: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:57:13.410: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:57:13.410: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 13 23:57:13.410: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 May 13 23:57:13.420: INFO: Waiting for running... 
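For readers following the arithmetic: the cpuFraction and memFraction values logged above are simply the summed pod requests reported for a node divided by that node's allocatable capacity (CPU in millicores, memory in bytes). A minimal sketch of that calculation, plugging in node1's figures from the lines above (illustrative only, not the e2e framework's own code):

    package main

    import "fmt"

    // fraction returns requested/allocatable, the same ratio the log reports
    // as cpuFraction and memFraction.
    func fraction(requested, allocatable int64) float64 {
        return float64(requested) / float64(allocatable)
    }

    func main() {
        fmt.Println(fraction(937, 77000))               // ~0.01216883 (node1 CPU, millicores)
        fmt.Println(fraction(1774807040, 178884608000)) // ~0.00992152 (node1 memory, bytes)
    }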
May 13 23:57:13.426: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 13 23:57:18.493: INFO: ComputeCPUMemFraction for node: node1 May 13 23:57:18.493: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:57:18.493: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:57:18.493: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:57:18.493: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:57:18.493: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:57:18.493: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:57:18.493: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:57:18.493: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:57:18.493: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:57:18.493: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:57:18.493: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:57:18.493: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:57:18.493: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:57:18.493: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:57:18.493: INFO: Pod for on the node: 9fc6e2bc-299b-4755-986f-34284dd01f3a-0, Cpu: 45263, Mem: 105568540672 May 13 23:57:18.493: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 May 13 23:57:18.494: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 13 23:57:18.494: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
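The very large pod named 9fc6e2bc-299b-4755-986f-34284dd01f3a-0 above is the "balanced" filler pod created by the preceding step. Judging from the numbers, its CPU request looks like whatever is needed to raise each node's requested/allocatable ratio to a common target (0.6 here); that inference is an assumption about intent, not a copy of the framework's code. A sketch under that assumption:

    package main

    import (
        "fmt"
        "math"
    )

    // fillerMilliCPU reconstructs the sizing rule suggested by the log: request
    // enough CPU to bring a node's requested/allocatable ratio up to the target.
    func fillerMilliCPU(target float64, allocatableMilli, requestedMilli int64) int64 {
        return int64(math.Round(target*float64(allocatableMilli))) - requestedMilli
    }

    func main() {
        fmt.Println(fillerMilliCPU(0.6, 77000, 937)) // 45263, the node1 filler pod's CPU request above
        fmt.Println(fillerMilliCPU(0.6, 77000, 687)) // 45513, the node2 filler pod's CPU request in the next block
    }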
May 13 23:57:18.494: INFO: ComputeCPUMemFraction for node: node2 May 13 23:57:18.494: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:57:18.494: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:57:18.494: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:57:18.494: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:57:18.494: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:57:18.494: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:57:18.494: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:57:18.494: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:57:18.494: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:57:18.494: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:57:18.494: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:57:18.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:57:18.494: INFO: Pod for on the node: ec35992b-8368-49e6-a981-cc0b45e96750-0, Cpu: 45513, Mem: 106523830271 May 13 23:57:18.494: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 May 13 23:57:18.494: INFO: Node: node2, totalRequestedMemResource: 107343347711, memAllocatableVal: 178884608000, memFraction: 0.6000703409373265 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:57:32.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4557" for this suite. 
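The steps above first place a labelled pod (pod-with-label-security-s1 in the earlier listing, whose name suggests a security=S1 label) on one node, then launch a pod carrying a PodAntiAffinity term against that label and verify it lands on the other node. The snippet below is an illustrative shape for such an anti-affinity term using the client-go core/v1 types; the exact field choices (preferred vs. required, weight, selector) are assumptions, not the test's literal spec:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // antiAffinityAvoidingS1 prefers not to land on a host that already runs a
    // pod labelled security=S1, keyed on kubernetes.io/hostname.
    func antiAffinityAvoidingS1() *v1.Affinity {
        return &v1.Affinity{
            PodAntiAffinity: &v1.PodAntiAffinity{
                PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
                    Weight: 100,
                    PodAffinityTerm: v1.PodAffinityTerm{
                        LabelSelector: &metav1.LabelSelector{
                            MatchLabels: map[string]string{"security": "S1"},
                        },
                        TopologyKey: "kubernetes.io/hostname",
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(antiAffinityAvoidingS1().PodAntiAffinity != nil)
    }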
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:83.302 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":9,"skipped":4308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:57:32.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 13 23:57:32.589: INFO: Waiting up to 1m0s for all nodes to be ready May 13 23:58:32.650: INFO: Waiting for terminating namespaces to be deleted... May 13 23:58:32.652: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 13 23:58:32.670: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting May 13 23:58:32.670: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting May 13 23:58:32.670: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 13 23:58:32.670: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
May 13 23:58:32.688: INFO: ComputeCPUMemFraction for node: node1 May 13 23:58:32.688: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:58:32.688: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:58:32.688: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:58:32.688: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:58:32.688: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:58:32.688: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:58:32.688: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:58:32.688: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:58:32.688: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 13 23:58:32.688: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 13 23:58:32.688: INFO: ComputeCPUMemFraction for node: node2 May 13 23:58:32.688: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:58:32.688: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:58:32.688: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:58:32.688: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:58:32.688: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:58:32.688: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:58:32.688: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:58:32.688: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:58:32.688: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:58:32.688: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 13 23:58:32.688: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to 
get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 May 13 23:58:40.788: INFO: ComputeCPUMemFraction for node: node2 May 13 23:58:40.788: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:58:40.788: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:58:40.788: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:58:40.788: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:58:40.788: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:58:40.788: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:58:40.788: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:58:40.788: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:58:40.788: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:58:40.789: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:58:40.789: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:58:40.789: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 13 23:58:40.789: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 May 13 23:58:40.789: INFO: ComputeCPUMemFraction for node: node1 May 13 23:58:40.789: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:58:40.789: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:58:40.789: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:58:40.789: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:58:40.789: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:58:40.789: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:58:40.789: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:58:40.789: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:58:40.789: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 
400, Mem: 1205862400 May 13 23:58:40.789: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 13 23:58:40.789: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 13 23:58:40.798: INFO: Waiting for running... May 13 23:58:40.803: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. May 13 23:58:45.871: INFO: ComputeCPUMemFraction for node: node2 May 13 23:58:45.871: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 13 23:58:45.871: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 13 23:58:45.871: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 13 23:58:45.871: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 13 23:58:45.871: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 13 23:58:45.871: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 13 23:58:45.871: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 13 23:58:45.871: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: 0a2a0134-eb83-431a-8c6d-96c9b18bfe32-0, Cpu: 37813, Mem: 88635369472 May 13 23:58:45.871: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 13 23:58:45.871: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 13 23:58:45.871: INFO: ComputeCPUMemFraction for node: node1 May 13 23:58:45.871: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 13 23:58:45.871: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 13 23:58:45.871: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 13 23:58:45.871: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 13 23:58:45.871: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 13 23:58:45.871: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 13 23:58:45.871: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 13 23:58:45.871: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 13 23:58:45.871: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 13 23:58:45.871: INFO: Pod for on the node: ebc4f6c0-9bdb-41fa-8c8b-76e45fcf769b-0, Cpu: 37563, Mem: 87680079872 May 13 23:58:45.871: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 13 23:58:45.871: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:59:03.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8994" for this suite. 
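The steps above show the scoring variant of pod topology spread: both nodes carry the dedicated key kubernetes.io/e2e-pts-score, four matching replicas are packed onto node2, and a pod asking to be spread across that key is expected to score node1 higher and land there. The sketch below shows the general shape of a soft (ScheduleAnyway) topology spread constraint over that key; the label selector and MaxSkew value are illustrative assumptions:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // spreadOverPTSKey spreads pods labelled foo=bar across the test's dedicated
    // topology key. ScheduleAnyway makes this a scoring preference rather than a
    // hard filter, so the node with fewer matching pods is favoured.
    func spreadOverPTSKey() v1.TopologySpreadConstraint {
        return v1.TopologySpreadConstraint{
            MaxSkew:           1,
            TopologyKey:       "kubernetes.io/e2e-pts-score",
            WhenUnsatisfiable: v1.ScheduleAnyway,
            LabelSelector: &metav1.LabelSelector{
                MatchLabels: map[string]string{"foo": "bar"},
            },
        }
    }

    func main() {
        fmt.Println(spreadOverPTSKey().TopologyKey)
    }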
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:91.402 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":10,"skipped":4774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:59:03.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 May 13 23:59:03.997: INFO: Waiting up to 1m0s for all nodes to be ready May 14 00:00:04.050: INFO: Waiting for terminating namespaces to be deleted... May 14 00:00:04.053: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 14 00:00:04.072: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting May 14 00:00:04.072: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting May 14 00:00:04.072: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 14 00:00:04.072: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
May 14 00:00:04.088: INFO: ComputeCPUMemFraction for node: node1 May 14 00:00:04.088: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 14 00:00:04.088: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 14 00:00:04.088: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 14 00:00:04.088: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 14 00:00:04.088: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 14 00:00:04.088: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 14 00:00:04.088: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 14 00:00:04.088: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 14 00:00:04.088: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 14 00:00:04.088: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 14 00:00:04.088: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 14 00:00:04.088: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 14 00:00:04.088: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 14 00:00:04.088: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 14 00:00:04.088: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 14 00:00:04.089: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 14 00:00:04.089: INFO: ComputeCPUMemFraction for node: node2 May 14 00:00:04.089: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 14 00:00:04.089: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 14 00:00:04.089: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 14 00:00:04.089: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 14 00:00:04.089: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 14 00:00:04.089: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 14 00:00:04.089: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 14 00:00:04.089: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 14 00:00:04.089: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 14 00:00:04.089: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 14 00:00:04.089: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 14 00:00:04.089: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 14 00:00:04.089: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 14 00:00:04.089: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 May 14 00:00:04.107: INFO: ComputeCPUMemFraction for node: node1 May 14 
00:00:04.107: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 14 00:00:04.107: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 14 00:00:04.107: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 14 00:00:04.107: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 14 00:00:04.107: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 14 00:00:04.107: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 14 00:00:04.107: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 14 00:00:04.107: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 14 00:00:04.107: INFO: Node: node1, totalRequestedCPUResource: 937, cpuAllocatableMil: 77000, cpuFraction: 0.01216883116883117 May 14 00:00:04.107: INFO: Node: node1, totalRequestedMemResource: 1774807040, memAllocatableVal: 178884608000, memFraction: 0.009921519016325877 May 14 00:00:04.107: INFO: ComputeCPUMemFraction for node: node2 May 14 00:00:04.107: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 14 00:00:04.107: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 14 00:00:04.107: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 14 00:00:04.107: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 14 00:00:04.107: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 14 00:00:04.107: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 14 00:00:04.107: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 14 00:00:04.108: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 14 00:00:04.108: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 14 00:00:04.108: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 14 00:00:04.108: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 14 00:00:04.108: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 14 00:00:04.108: INFO: Node: node2, totalRequestedCPUResource: 687, cpuAllocatableMil: 77000, cpuFraction: 0.008922077922077921 May 14 00:00:04.108: INFO: Node: node2, totalRequestedMemResource: 819517440, memAllocatableVal: 178884608000, memFraction: 0.00458126302292034 May 14 00:00:04.121: INFO: Waiting for running... May 14 00:00:04.125: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 14 00:00:09.196: INFO: ComputeCPUMemFraction for node: node1 May 14 00:00:09.196: INFO: Pod for on the node: cmk-init-discover-node1-m2p59, Cpu: 300, Mem: 629145600 May 14 00:00:09.196: INFO: Pod for on the node: cmk-tfblh, Cpu: 200, Mem: 419430400 May 14 00:00:09.196: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-59hj6, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: kube-flannel-xfj7m, Cpu: 150, Mem: 64000000 May 14 00:00:09.196: INFO: Pod for on the node: kube-multus-ds-amd64-dtt2x, Cpu: 100, Mem: 94371840 May 14 00:00:09.196: INFO: Pod for on the node: kube-proxy-rs2zg, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-tcgth, Cpu: 50, Mem: 64000000 May 14 00:00:09.196: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-2bw7v, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 May 14 00:00:09.196: INFO: Pod for on the node: node-feature-discovery-worker-l459c, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: collectd-p26j2, Cpu: 300, Mem: 629145600 May 14 00:00:09.196: INFO: Pod for on the node: node-exporter-42x8d, Cpu: 112, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 May 14 00:00:09.196: INFO: Pod for on the node: 80079f9a-d637-4f48-8b3e-0877badec120-0, Cpu: 37563, Mem: 87680079872 May 14 00:00:09.196: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 14 00:00:09.196: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
May 14 00:00:09.196: INFO: ComputeCPUMemFraction for node: node2 May 14 00:00:09.196: INFO: Pod for on the node: cmk-init-discover-node2-hm7r7, Cpu: 300, Mem: 629145600 May 14 00:00:09.196: INFO: Pod for on the node: cmk-qhbd6, Cpu: 200, Mem: 419430400 May 14 00:00:09.196: INFO: Pod for on the node: kube-flannel-lv9xf, Cpu: 150, Mem: 64000000 May 14 00:00:09.196: INFO: Pod for on the node: kube-multus-ds-amd64-l7nx2, Cpu: 100, Mem: 94371840 May 14 00:00:09.196: INFO: Pod for on the node: kube-proxy-wkzbm, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 May 14 00:00:09.196: INFO: Pod for on the node: node-feature-discovery-worker-cxxqf, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: collectd-9gqhr, Cpu: 300, Mem: 629145600 May 14 00:00:09.196: INFO: Pod for on the node: node-exporter-n5snd, Cpu: 112, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: prometheus-operator-585ccfb458-vrwnp, Cpu: 200, Mem: 314572800 May 14 00:00:09.196: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6, Cpu: 100, Mem: 209715200 May 14 00:00:09.196: INFO: Pod for on the node: 8b0bd6d3-bb7a-44fa-b3ef-2c209b8f6ddf-0, Cpu: 37813, Mem: 88635369472 May 14 00:00:09.196: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 May 14 00:00:09.196: INFO: Node: node2, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Trying to apply 10 (tolerable) taints on the first node. STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b91435c1-b2ed-4f52-b42f=testing-taint-value-87e19f50-937a-492e-8886-84e3347921a3:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2edbce72-d74f-42dc-9328=testing-taint-value-97ecb1e2-e176-4061-a315-0eee1a2b5908:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-07827a3d-37c1-4412-990b=testing-taint-value-06443301-f4e7-4a0a-99f5-39bb62d6f48f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-40b72bce-0791-44fa-9806=testing-taint-value-4a16545f-753f-4280-88da-22c3de85197b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5b585b9f-ee94-49d6-83d6=testing-taint-value-3186be0b-8053-4624-b559-eb3cb96a717c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d4af6e09-f9bc-4cb0-86ca=testing-taint-value-a9394be6-2e49-4ff7-b4ed-4852cad22b65:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fc1979d2-cf2b-478b-94d0=testing-taint-value-c0df407e-6dce-40c6-baf5-a708fcd6c999:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d7a570d4-e688-4b1c-b045=testing-taint-value-d9d215a4-6a10-46cf-8125-a254cfa43a96:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-80d7ef26-a4a9-4466-9aba=testing-taint-value-d5be94a2-dba3-4021-9b61-bb4d4ab0ef29:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fa57f83d-eb07-441a-a53c=testing-taint-value-4473dadd-23ec-43f9-ba51-a810ef2e2a71:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: 
verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dd1ad523-629c-4999-8101=testing-taint-value-7136a438-a1d1-4b6e-b231-23db39131697:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-75114eaf-4fe3-4f1b-8619=testing-taint-value-e92ed6d9-d08d-43e1-85f9-12643ae6fee4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b7caecb5-ea7e-43d6-af5e=testing-taint-value-5d49a864-d7bd-4fce-893f-64f350f35632:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c7d52edf-0369-46cb-abb3=testing-taint-value-e0496ec1-8671-4a40-8138-696a0d2eb41a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-64bc237d-cc05-4722-b7a9=testing-taint-value-ec94c72c-5b7b-45e6-9cb8-add8fbe21b82:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bd40eb72-43a9-43e9-801a=testing-taint-value-d788d3c0-2ac6-4073-aca4-beb96fdebc08:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-53a04132-b094-4f5e-8502=testing-taint-value-f3fc724b-6669-40a5-ba39-a264721cd245:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dbc6fc57-3af7-4e25-b707=testing-taint-value-f5c6226b-f3c3-4382-83d8-762f2a7597a1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ce7a4999-e4da-48d3-8bc1=testing-taint-value-40c00f74-3f61-49e8-b8e1-d3314cc1c396:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4c35cee3-2326-4cff-8fd3=testing-taint-value-2dd25e89-6d9e-4ce3-b805-eff75652e2fd:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. 
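The pattern above: ten PreferNoSchedule taints the pod tolerates go onto the first node, and ten it does not tolerate go onto the other nodes. Because PreferNoSchedule only biases scoring and never hard-blocks placement, the tolerant pod (the with-tolerations pod seen later in the log) should end up on the first node. A small runnable sketch of one taint/toleration pair of the same shape (the key and value here are placeholders, not the generated UUID-based names above):

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // A soft taint, same effect as the ones applied above.
        taint := v1.Taint{
            Key:    "kubernetes.io/e2e-scheduling-priorities-example",
            Value:  "testing-taint-value-example",
            Effect: v1.TaintEffectPreferNoSchedule,
        }

        // The matching toleration the tolerant pod would carry.
        toleration := v1.Toleration{
            Key:      taint.Key,
            Operator: v1.TolerationOpEqual,
            Value:    taint.Value,
            Effect:   v1.TaintEffectPreferNoSchedule,
        }

        fmt.Println(taint.ToString())
        fmt.Println(toleration.ToleratesTaint(&taint)) // true
    }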
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dd1ad523-629c-4999-8101=testing-taint-value-7136a438-a1d1-4b6e-b231-23db39131697:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-75114eaf-4fe3-4f1b-8619=testing-taint-value-e92ed6d9-d08d-43e1-85f9-12643ae6fee4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b7caecb5-ea7e-43d6-af5e=testing-taint-value-5d49a864-d7bd-4fce-893f-64f350f35632:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c7d52edf-0369-46cb-abb3=testing-taint-value-e0496ec1-8671-4a40-8138-696a0d2eb41a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-64bc237d-cc05-4722-b7a9=testing-taint-value-ec94c72c-5b7b-45e6-9cb8-add8fbe21b82:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bd40eb72-43a9-43e9-801a=testing-taint-value-d788d3c0-2ac6-4073-aca4-beb96fdebc08:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-53a04132-b094-4f5e-8502=testing-taint-value-f3fc724b-6669-40a5-ba39-a264721cd245:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dbc6fc57-3af7-4e25-b707=testing-taint-value-f5c6226b-f3c3-4382-83d8-762f2a7597a1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ce7a4999-e4da-48d3-8bc1=testing-taint-value-40c00f74-3f61-49e8-b8e1-d3314cc1c396:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4c35cee3-2326-4cff-8fd3=testing-taint-value-2dd25e89-6d9e-4ce3-b805-eff75652e2fd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b91435c1-b2ed-4f52-b42f=testing-taint-value-87e19f50-937a-492e-8886-84e3347921a3:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2edbce72-d74f-42dc-9328=testing-taint-value-97ecb1e2-e176-4061-a315-0eee1a2b5908:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-07827a3d-37c1-4412-990b=testing-taint-value-06443301-f4e7-4a0a-99f5-39bb62d6f48f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-40b72bce-0791-44fa-9806=testing-taint-value-4a16545f-753f-4280-88da-22c3de85197b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5b585b9f-ee94-49d6-83d6=testing-taint-value-3186be0b-8053-4624-b559-eb3cb96a717c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d4af6e09-f9bc-4cb0-86ca=testing-taint-value-a9394be6-2e49-4ff7-b4ed-4852cad22b65:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fc1979d2-cf2b-478b-94d0=testing-taint-value-c0df407e-6dce-40c6-baf5-a708fcd6c999:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d7a570d4-e688-4b1c-b045=testing-taint-value-d9d215a4-6a10-46cf-8125-a254cfa43a96:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-80d7ef26-a4a9-4466-9aba=testing-taint-value-d5be94a2-dba3-4021-9b61-bb4d4ab0ef29:PreferNoSchedule STEP: verifying the 
node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fa57f83d-eb07-441a-a53c=testing-taint-value-4473dadd-23ec-43f9-ba51-a810ef2e2a71:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 00:00:16.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3207" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:72.581 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":11,"skipped":5281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 14 00:00:16.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 14 00:00:16.587: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 14 00:00:16.595: INFO: Waiting for terminating namespaces to be deleted... 
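The predicates spec that has just started creates three pods (see the STEP lines further below) that all ask for host port 54321 on the same node, and expects every one of them to schedule: host ports only conflict when the full (hostIP, hostPort, protocol) tuple collides. A sketch of the three port declarations involved, using values from the log; the container port number is a placeholder:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        // Same hostPort, but differing hostIP and/or protocol, so none of the
        // (hostIP, hostPort, protocol) tuples collide and all three pods fit.
        ports := []v1.ContainerPort{
            {ContainerPort: 80, HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP},
            {ContainerPort: 80, HostPort: 54321, HostIP: "10.10.190.207", Protocol: v1.ProtocolTCP},
            {ContainerPort: 80, HostPort: 54321, HostIP: "10.10.190.207", Protocol: v1.ProtocolUDP},
        }
        for _, p := range ports {
            fmt.Printf("%s %s:%d\n", p.Protocol, p.HostIP, p.HostPort)
        }
    }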
May 14 00:00:16.597: INFO: Logging pods the apiserver thinks is on node node1 before test May 14 00:00:16.607: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 14 00:00:16.607: INFO: Container discover ready: false, restart count 0 May 14 00:00:16.607: INFO: Container init ready: false, restart count 0 May 14 00:00:16.607: INFO: Container install ready: false, restart count 0 May 14 00:00:16.607: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 14 00:00:16.607: INFO: Container nodereport ready: true, restart count 0 May 14 00:00:16.607: INFO: Container reconcile ready: true, restart count 0 May 14 00:00:16.607: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container cmk-webhook ready: true, restart count 0 May 14 00:00:16.607: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container kube-flannel ready: true, restart count 2 May 14 00:00:16.607: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container kube-multus ready: true, restart count 1 May 14 00:00:16.607: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container kube-proxy ready: true, restart count 2 May 14 00:00:16.607: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 14 00:00:16.607: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 14 00:00:16.607: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container nginx-proxy ready: true, restart count 2 May 14 00:00:16.607: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container nfd-worker ready: true, restart count 0 May 14 00:00:16.607: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container kube-sriovdp ready: true, restart count 0 May 14 00:00:16.607: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 14 00:00:16.607: INFO: Container collectd ready: true, restart count 0 May 14 00:00:16.607: INFO: Container collectd-exporter ready: true, restart count 0 May 14 00:00:16.607: INFO: Container rbac-proxy ready: true, restart count 0 May 14 00:00:16.607: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 14 00:00:16.607: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 14 00:00:16.607: INFO: Container node-exporter ready: true, restart count 0 May 14 00:00:16.607: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 
+0000 UTC (4 container statuses recorded) May 14 00:00:16.607: INFO: Container config-reloader ready: true, restart count 0 May 14 00:00:16.607: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 14 00:00:16.607: INFO: Container grafana ready: true, restart count 0 May 14 00:00:16.607: INFO: Container prometheus ready: true, restart count 1 May 14 00:00:16.607: INFO: with-tolerations from sched-priority-3207 started at 2022-05-14 00:00:09 +0000 UTC (1 container statuses recorded) May 14 00:00:16.607: INFO: Container with-tolerations ready: true, restart count 0 May 14 00:00:16.607: INFO: Logging pods the apiserver thinks is on node node2 before test May 14 00:00:16.614: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 14 00:00:16.614: INFO: Container discover ready: false, restart count 0 May 14 00:00:16.614: INFO: Container init ready: false, restart count 0 May 14 00:00:16.614: INFO: Container install ready: false, restart count 0 May 14 00:00:16.614: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 14 00:00:16.614: INFO: Container nodereport ready: true, restart count 0 May 14 00:00:16.614: INFO: Container reconcile ready: true, restart count 0 May 14 00:00:16.614: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 14 00:00:16.614: INFO: Container kube-flannel ready: true, restart count 2 May 14 00:00:16.615: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 14 00:00:16.615: INFO: Container kube-multus ready: true, restart count 1 May 14 00:00:16.615: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 14 00:00:16.615: INFO: Container kube-proxy ready: true, restart count 2 May 14 00:00:16.615: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 14 00:00:16.615: INFO: Container nginx-proxy ready: true, restart count 2 May 14 00:00:16.615: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 14 00:00:16.615: INFO: Container nfd-worker ready: true, restart count 0 May 14 00:00:16.615: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 14 00:00:16.615: INFO: Container kube-sriovdp ready: true, restart count 0 May 14 00:00:16.615: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 14 00:00:16.615: INFO: Container collectd ready: true, restart count 0 May 14 00:00:16.615: INFO: Container collectd-exporter ready: true, restart count 0 May 14 00:00:16.615: INFO: Container rbac-proxy ready: true, restart count 0 May 14 00:00:16.615: INFO: node-exporter-n5snd from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 14 00:00:16.615: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 14 00:00:16.615: INFO: Container node-exporter ready: true, restart count 0 May 14 00:00:16.615: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 14 00:00:16.615: INFO: Container kube-rbac-proxy ready: true, 
restart count 0 May 14 00:00:16.615: INFO: Container prometheus-operator ready: true, restart count 0 May 14 00:00:16.615: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 14 00:00:16.615: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-9d1986a0-b2f0-4b34-bfbf-34061924aa6e 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-9d1986a0-b2f0-4b34-bfbf-34061924aa6e off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-9d1986a0-b2f0-4b34-bfbf-34061924aa6e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 00:00:32.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3504" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.176 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":12,"skipped":5733,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 14 00:00:32.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 14 00:00:32.759: INFO: Waiting up to 1m0s for all (but 0) nodes 
to be ready May 14 00:00:32.767: INFO: Waiting for terminating namespaces to be deleted... May 14 00:00:32.769: INFO: Logging pods the apiserver thinks is on node node1 before test May 14 00:00:32.786: INFO: cmk-init-discover-node1-m2p59 from kube-system started at 2022-05-13 20:12:33 +0000 UTC (3 container statuses recorded) May 14 00:00:32.786: INFO: Container discover ready: false, restart count 0 May 14 00:00:32.786: INFO: Container init ready: false, restart count 0 May 14 00:00:32.786: INFO: Container install ready: false, restart count 0 May 14 00:00:32.786: INFO: cmk-tfblh from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 14 00:00:32.786: INFO: Container nodereport ready: true, restart count 0 May 14 00:00:32.786: INFO: Container reconcile ready: true, restart count 0 May 14 00:00:32.786: INFO: cmk-webhook-6c9d5f8578-59hj6 from kube-system started at 2022-05-13 20:13:16 +0000 UTC (1 container statuses recorded) May 14 00:00:32.786: INFO: Container cmk-webhook ready: true, restart count 0 May 14 00:00:32.786: INFO: kube-flannel-xfj7m from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 14 00:00:32.786: INFO: Container kube-flannel ready: true, restart count 2 May 14 00:00:32.786: INFO: kube-multus-ds-amd64-dtt2x from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 14 00:00:32.786: INFO: Container kube-multus ready: true, restart count 1 May 14 00:00:32.786: INFO: kube-proxy-rs2zg from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 14 00:00:32.786: INFO: Container kube-proxy ready: true, restart count 2 May 14 00:00:32.786: INFO: kubernetes-dashboard-785dcbb76d-tcgth from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 14 00:00:32.786: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 14 00:00:32.786: INFO: kubernetes-metrics-scraper-5558854cb-2bw7v from kube-system started at 2022-05-13 20:01:04 +0000 UTC (1 container statuses recorded) May 14 00:00:32.787: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 May 14 00:00:32.788: INFO: nginx-proxy-node1 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 14 00:00:32.788: INFO: Container nginx-proxy ready: true, restart count 2 May 14 00:00:32.788: INFO: node-feature-discovery-worker-l459c from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 14 00:00:32.788: INFO: Container nfd-worker ready: true, restart count 0 May 14 00:00:32.788: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qscxr from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 14 00:00:32.788: INFO: Container kube-sriovdp ready: true, restart count 0 May 14 00:00:32.788: INFO: collectd-p26j2 from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 14 00:00:32.788: INFO: Container collectd ready: true, restart count 0 May 14 00:00:32.788: INFO: Container collectd-exporter ready: true, restart count 0 May 14 00:00:32.788: INFO: Container rbac-proxy ready: true, restart count 0 May 14 00:00:32.788: INFO: node-exporter-42x8d from monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 14 00:00:32.788: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 14 00:00:32.788: INFO: Container node-exporter ready: true, restart count 0 
May 14 00:00:32.788: INFO: prometheus-k8s-0 from monitoring started at 2022-05-13 20:14:32 +0000 UTC (4 container statuses recorded) May 14 00:00:32.788: INFO: Container config-reloader ready: true, restart count 0 May 14 00:00:32.788: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 14 00:00:32.788: INFO: Container grafana ready: true, restart count 0 May 14 00:00:32.788: INFO: Container prometheus ready: true, restart count 1 May 14 00:00:32.788: INFO: pod1 from sched-pred-3504 started at 2022-05-14 00:00:20 +0000 UTC (1 container statuses recorded) May 14 00:00:32.788: INFO: Container agnhost ready: true, restart count 0 May 14 00:00:32.788: INFO: pod2 from sched-pred-3504 started at 2022-05-14 00:00:24 +0000 UTC (1 container statuses recorded) May 14 00:00:32.788: INFO: Container agnhost ready: true, restart count 0 May 14 00:00:32.788: INFO: pod3 from sched-pred-3504 started at 2022-05-14 00:00:28 +0000 UTC (1 container statuses recorded) May 14 00:00:32.788: INFO: Container agnhost ready: true, restart count 0 May 14 00:00:32.788: INFO: Logging pods the apiserver thinks is on node node2 before test May 14 00:00:32.799: INFO: cmk-init-discover-node2-hm7r7 from kube-system started at 2022-05-13 20:12:52 +0000 UTC (3 container statuses recorded) May 14 00:00:32.799: INFO: Container discover ready: false, restart count 0 May 14 00:00:32.799: INFO: Container init ready: false, restart count 0 May 14 00:00:32.799: INFO: Container install ready: false, restart count 0 May 14 00:00:32.799: INFO: cmk-qhbd6 from kube-system started at 2022-05-13 20:13:15 +0000 UTC (2 container statuses recorded) May 14 00:00:32.799: INFO: Container nodereport ready: true, restart count 0 May 14 00:00:32.799: INFO: Container reconcile ready: true, restart count 0 May 14 00:00:32.799: INFO: kube-flannel-lv9xf from kube-system started at 2022-05-13 20:00:24 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container kube-flannel ready: true, restart count 2 May 14 00:00:32.799: INFO: kube-multus-ds-amd64-l7nx2 from kube-system started at 2022-05-13 20:00:33 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container kube-multus ready: true, restart count 1 May 14 00:00:32.799: INFO: kube-proxy-wkzbm from kube-system started at 2022-05-13 19:59:27 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container kube-proxy ready: true, restart count 2 May 14 00:00:32.799: INFO: nginx-proxy-node2 from kube-system started at 2022-05-13 19:59:24 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container nginx-proxy ready: true, restart count 2 May 14 00:00:32.799: INFO: node-feature-discovery-worker-cxxqf from kube-system started at 2022-05-13 20:08:58 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container nfd-worker ready: true, restart count 0 May 14 00:00:32.799: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-fcxrt from kube-system started at 2022-05-13 20:10:11 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container kube-sriovdp ready: true, restart count 0 May 14 00:00:32.799: INFO: collectd-9gqhr from monitoring started at 2022-05-13 20:18:14 +0000 UTC (3 container statuses recorded) May 14 00:00:32.799: INFO: Container collectd ready: true, restart count 0 May 14 00:00:32.799: INFO: Container collectd-exporter ready: true, restart count 0 May 14 00:00:32.799: INFO: Container rbac-proxy ready: true, restart count 0 May 14 00:00:32.799: INFO: node-exporter-n5snd from 
monitoring started at 2022-05-13 20:14:18 +0000 UTC (2 container statuses recorded) May 14 00:00:32.799: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 14 00:00:32.799: INFO: Container node-exporter ready: true, restart count 0 May 14 00:00:32.799: INFO: prometheus-operator-585ccfb458-vrwnp from monitoring started at 2022-05-13 20:14:11 +0000 UTC (2 container statuses recorded) May 14 00:00:32.799: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 14 00:00:32.799: INFO: Container prometheus-operator ready: true, restart count 0 May 14 00:00:32.799: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8xcp6 from monitoring started at 2022-05-13 20:17:23 +0000 UTC (1 container statuses recorded) May 14 00:00:32.799: INFO: Container tas-extender ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd.16eecfa8b9baa048], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd.16eecfaaabb51dde], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6444/filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd.16eecfab074c1eb9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd.16eecfab1abc741f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 326.118289ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd.16eecfab22243c45], Reason = [Created], Message = [Created container filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd.16eecfab2a1827f8], Reason = [Started], Message = [Started container filler-pod-2c18ea9d-39fc-4b29-b75a-54dd118efcdd] STEP: Considering event: Type = [Normal], Name = [without-label.16eecfa7c952a48b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6444/without-label to node1] STEP: Considering event: Type = [Normal], Name = [without-label.16eecfa8228be6fa], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16eecfa83546223b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 314.185364ms] STEP: Considering event: Type = [Normal], Name = [without-label.16eecfa8402a39f4], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16eecfa847847264], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16eecfa8b859195f], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-podd6c6d6ca-f839-4a13-9b4f-5f85cee5e385.16eecfab864af6fe], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 14 00:00:49.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6444" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.180 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":13,"skipped":5744,"failed":0} SSSSSSSSSSSSSSSSMay 14 00:00:49.918: INFO: Running AfterSuite actions on all nodes May 14 00:00:49.918: INFO: Running AfterSuite actions on node 1 May 14 00:00:49.918: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0} Ran 13 of 5773 Specs in 520.555 seconds SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped PASS Ginkgo ran 1 suite in 8m41.910115139s Test Suite Passed
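The final spec above ("verify pod overhead is accounted for") relies on RuntimeClass overhead being charged against node allocatable on top of the container requests, which is why the additional pod reported "Insufficient example.com/beardsecond". The sketch below is illustrative only, not the suite's code: the extended resource name and the pause image come from the log, while the handler name, quantities and namespace are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	beardsecond := corev1.ResourceName("example.com/beardsecond")

	// A RuntimeClass that declares a fixed per-pod overhead in the fake
	// extended resource used by the test.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"},
		Handler:    "runc", // placeholder handler name
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				beardsecond: resource.MustParse("250"), // illustrative overhead
			},
		},
	}

	// A pod that opts into that RuntimeClass and also requests the resource
	// directly, similar in spirit to the "additional-pod" in the events above.
	runtimeClassName := rc.Name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod", Namespace: "sched-pred-demo"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClassName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{beardsecond: resource.MustParse("500")},
					Limits:   corev1.ResourceList{beardsecond: resource.MustParse("500")},
				},
			}},
		},
	}

	// The scheduler's fit check uses request + RuntimeClass overhead.
	total := pod.Spec.Containers[0].Resources.Requests[beardsecond]
	overhead := rc.Overhead.PodFixed[beardsecond]
	total.Add(overhead)
	fmt.Printf("effective %s demand: %s (request 500 + overhead %s)\n",
		beardsecond, total.String(), overhead.String())
}
```

Under these assumed numbers the pod is treated as needing 750 beardseconds rather than 500, so a node with less than 750 of the resource left fails the fit check even though the container request alone would fit; that is the mechanism behind the FailedScheduling events recorded for the filler and additional pods.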