I0603 23:47:16.518970 24 e2e.go:129] Starting e2e run "9fa43479-b9b9-4902-b872-99bc37aa3999" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654300035 - Will randomize all specs
Will run 13 of 5773 specs

Jun 3 23:47:16.534: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:47:16.539: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 3 23:47:16.566: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 23:47:16.645: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 23:47:16.645: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 23:47:16.645: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 23:47:16.645: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 23:47:16.645: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 3 23:47:16.662: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 3 23:47:16.662: INFO: e2e test version: v1.21.9
Jun 3 23:47:16.663: INFO: kube-apiserver version: v1.21.1
Jun 3 23:47:16.663: INFO: >>> kubeConfig: /root/.kube/config
Jun 3 23:47:16.670: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring
  validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:47:16.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
W0603 23:47:16.700178 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 3 23:47:16.700: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 3 23:47:16.704: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Jun 3 23:47:16.706: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 23:48:16.759: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:48:16.762: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 23:48:16.784: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 23:48:16.784: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 23:48:16.784: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 23:48:16.784: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 23:48:16.799: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:48:16.799: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:48:16.799: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:48:16.799: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:48:16.799: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:48:16.799: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:48:16.799: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:48:16.799: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:48:16.799: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:48:16.799: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:48:16.799: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:48:16.799: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:48:16.799: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:48:16.799: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:48:16.799: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:48:16.799: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:48:16.799: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:48:16.799: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:48:16.799: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:48:16.799: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:48:16.799: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:48:16.799: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[BeforeEach] PodTopologySpread Scoring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes.
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406
Jun 3 23:48:24.906: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:48:24.906: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:48:24.906: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:48:24.906: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:48:24.906: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:48:24.906: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.906: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:48:24.906: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.906: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:48:24.906: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.906: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.906: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:48:24.906: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:48:24.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.906: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:48:24.907: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
Jun 3 23:48:24.907: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:48:24.907: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:48:24.907: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:48:24.907: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.907: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:48:24.907: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:48:24.907: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.907: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:48:24.907: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.907: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:48:24.907: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:48:24.907: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:48:24.907: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:48:24.907: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:48:24.907: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:48:24.918: INFO: Waiting for running...
Jun 3 23:48:24.922: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:48:29.990: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:48:29.990: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:48:29.990: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:48:29.990: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:48:29.990: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:48:29.990: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:48:29.991: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:48:29.991: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:48:29.991: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: fba5d606-30ae-467f-a2dd-f59c4ddd080b-0, Cpu: 37963, Mem: 88885940224
Jun 3 23:48:29.991: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 3 23:48:29.991: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273
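(Annotation: each ComputeCPUMemFraction entry reduces to a ratio of requested resources over the node's allocatable; the per-pod lines list the requests the scheduler charges, so they need not simply sum to the totals. A minimal Go sketch of the arithmetic, using the node1 and node2 figures logged above; the helper name here is ours, not the e2e framework's:

package main

import "fmt"

func main() {
	// Values copied from the node1 log entries above.
	const (
		requestedCPU   = 887          // totalRequestedCPUResource (millicores)
		allocatableCPU = 77000        // cpuAllocatableMil
		requestedMem   = 1710807040   // totalRequestedMemResource (bytes)
		allocatableMem = 178884608000 // memAllocatableVal
	)
	cpuFraction := float64(requestedCPU) / float64(allocatableCPU)
	memFraction := float64(requestedMem) / float64(allocatableMem)
	fmt.Println(cpuFraction) // 0.01151948051948052, matching the log
	fmt.Println(memFraction) // 0.009563746479518237, matching the log
}

The same ratio with node2's 537 millicores gives the logged 0.006974025974025974.)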
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:48:29.991: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:48:29.991: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:48:29.991: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:48:29.991: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:48:29.991: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:48:29.991: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:48:29.991: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:48:29.991: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:48:29.991: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:48:29.991: INFO: Pod for on the node: 9655c346-77fb-4b93-b743-41da6cecb159-0, Cpu: 37613, Mem: 87744079872
Jun 3 23:48:29.991: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 3 23:48:29.991: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Run a ReplicaSet with 4 replicas on node "node2"
STEP: Verifying if the test-pod lands on node "node1"
[AfterEach] PodTopologySpread Scoring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400
STEP: removing the label kubernetes.io/e2e-pts-score off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score
STEP: removing the label kubernetes.io/e2e-pts-score off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:48:46.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-5951" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:89.410 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Scoring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388
    validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":1,"skipped":113,"failed":0}
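(Annotation: the test above passes because PodTopologySpread scoring prefers the node that minimizes skew, the max-min spread of matching pods across the kubernetes.io/e2e-pts-score domains. With 4 matching replicas pinned to node2 and 0 on node1, placing the test pod on node1 gives the smaller skew. A minimal sketch of that preference, not the scheduler plugin's actual code, which weighs this against other scores:

package main

import "fmt"

// skew is the max-min spread of matching pods across topology domains,
// the quantity PodTopologySpread scoring tries to keep small.
func skew(counts map[string]int) int {
	min, max := int(^uint(0)>>1), 0
	for _, c := range counts {
		if c < min {
			min = c
		}
		if c > max {
			max = c
		}
	}
	return max - min
}

func main() {
	// 4 matching replicas already run on node2 (see the ReplicaSet step above).
	counts := map[string]int{"node1": 0, "node2": 4}
	for _, candidate := range []string{"node1", "node2"} {
		counts[candidate]++
		fmt.Printf("place on %s -> skew %d\n", candidate, skew(counts))
		counts[candidate]--
	}
	// place on node1 -> skew 3; place on node2 -> skew 5: node1 wins.
}
)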
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:48:46.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Jun 3 23:48:46.113: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 23:49:46.172: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:49:46.174: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 23:49:46.193: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 23:49:46.193: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 23:49:46.194: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 23:49:46.194: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 23:49:46.209: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:49:46.209: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:49:46.209: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.209: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:49:46.209: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:49:46.209: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:49:46.209: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.209: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:49:46.209: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:49:46.209: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:49:46.209: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:49:46.209: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.209: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:49:46.209: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:49:46.209: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:49:46.209: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:49:46.209: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:49:46.209: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.209: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:49:46.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.209: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:49:46.209: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[It] Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
Jun 3 23:49:46.225: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:49:46.225: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:49:46.225: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.226: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:49:46.226: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:49:46.226: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:49:46.226: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.226: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:49:46.226: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:49:46.226: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:49:46.226: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:49:46.226: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.226: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:49:46.226: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:49:46.226: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:49:46.226: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:49:46.226: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:49:46.226: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:49:46.226: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:49:46.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:49:46.226: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:49:46.226: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
Jun 3 23:49:46.239: INFO: Waiting for running...
Jun 3 23:49:46.243: INFO: Waiting for running...
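(Annotation: the two "Waiting for running..." entries track the per-node filler pods the balancing helper in priorities.go creates so both nodes start at the same utilization before the scheduling behavior under test is measured. Their requests are sized so requested/allocatable hits the target ratio; a sketch of that sizing, reproducing the filler CPU values logged in the previous test (pods 9655c346-... and fba5d606-...) and again below:

package main

import "fmt"

// fillerRequest returns the request a balancing pod needs so that
// requested/allocatable on the node reaches the target ratio.
func fillerRequest(ratio float64, allocatable, requested int64) int64 {
	return int64(ratio*float64(allocatable)) - requested
}

func main() {
	// CPU, balanced to ratio 0.5 (the log's cpuFraction: 0.5 afterwards):
	fmt.Println(fillerRequest(0.5, 77000, 887)) // 37613, node1's filler pod
	fmt.Println(fillerRequest(0.5, 77000, 537)) // 37963, node2's filler pod
	// Memory works the same way; the logged result lands slightly above an
	// exact 0.5 (memFraction 0.50007...), presumably from rounding when the
	// pod's memory request is built.
}
)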
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:49:51.311: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:49:51.311: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:49:51.311: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:49:51.311: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:49:51.311: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:49:51.311: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:49:51.311: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:49:51.311: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:49:51.311: INFO: Pod for on the node: 9d423e2a-e12f-485c-b25d-56575833b0c8-0, Cpu: 37613, Mem: 87744079872
Jun 3 23:49:51.311: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 3 23:49:51.311: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:49:51.311: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:49:51.311: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:49:51.311: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:49:51.311: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:49:51.311: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:49:51.311: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:49:51.311: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:49:51.311: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:49:51.311: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:49:51.311: INFO: Pod for on the node: 25fb1d37-be21-4c6d-b9a7-30759af698b3-0, Cpu: 37963, Mem: 88885940224
Jun 3 23:49:51.311: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 3 23:49:51.311: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273
STEP: Trying to apply 10 (tolerable) taints on the first node.
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c98f44de-d36a-46c0-abe7=testing-taint-value-d85c98e7-7ba0-4052-85d2-ce72da21a97e:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d0016b9b-66c2-44a3-85c8=testing-taint-value-5d8cb6d4-6877-4092-9e65-cb1908cd18b2:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8ab39c9e-c7ef-4ea2-a458=testing-taint-value-ff6001ea-542b-4730-a2e3-ac7f7b5134a5:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-87ed0f1e-7054-4de6-b914=testing-taint-value-74a3d4a1-7263-4fa9-a5b4-5e49824f178c:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-57c96e37-8da7-41b3-a78d=testing-taint-value-4a5d4376-c526-44d6-af2b-11d3f5eb96a9:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c3eb461d-daa1-4f0a-a827=testing-taint-value-2e977253-37bb-4763-8c94-c84d29d88bed:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f7d07a37-be22-4946-b9b0=testing-taint-value-37d713e1-9c46-47ef-b4dc-a161ccd57192:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-00328afa-dd9f-4df0-8319=testing-taint-value-eb8f2973-2f0d-4006-a183-ad2b6445bf01:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8e84418b-eab2-4b92-b94a=testing-taint-value-c93bb180-8613-4b5f-b2b2-eea7e082d36d:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1de0480e-62de-453f-9b0a=testing-taint-value-24dd178c-9f24-48d7-af57-ae3c5f9e7e0a:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-55efbe01-e4da-442e-a850=testing-taint-value-e2f1994b-86ab-4511-a399-01d058fe3555:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5c6ba63e-09dd-400d-a3fe=testing-taint-value-2df9e82a-7ae3-401f-ae31-ff0c980ca977:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7d4cca2e-2ab7-4759-870f=testing-taint-value-5d837a7f-0022-4db0-8181-fb5f6903459e:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4f164dbb-05d7-46e9-bca6=testing-taint-value-43af7b4a-68f8-44e2-8459-17f5bec9ce8c:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-26055724-5bfa-483d-85d6=testing-taint-value-a2655abb-82df-4164-8be6-a8884f473c57:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-55430f8d-b5b1-45bc-a8ab=testing-taint-value-75495bbf-4ec6-4542-b146-a1956ae4c9d7:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ba5e0252-1085-480f-a367=testing-taint-value-d02e268d-4d3c-4a86-b230-cfcc66b56db2:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c0b773e5-8812-4a30-86b7=testing-taint-value-a3875ba7-04e2-4db0-b3c6-5d5d582c470f:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-095df0f8-9eeb-4e86-8220=testing-taint-value-5a9ebc96-aa52-415f-a02c-e39e4ecb4246:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c8d6b869-6a99-47b7-9d50=testing-taint-value-bc2758d2-b1de-4f79-b5be-599eb795a771:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-55efbe01-e4da-442e-a850=testing-taint-value-e2f1994b-86ab-4511-a399-01d058fe3555:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5c6ba63e-09dd-400d-a3fe=testing-taint-value-2df9e82a-7ae3-401f-ae31-ff0c980ca977:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7d4cca2e-2ab7-4759-870f=testing-taint-value-5d837a7f-0022-4db0-8181-fb5f6903459e:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4f164dbb-05d7-46e9-bca6=testing-taint-value-43af7b4a-68f8-44e2-8459-17f5bec9ce8c:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-26055724-5bfa-483d-85d6=testing-taint-value-a2655abb-82df-4164-8be6-a8884f473c57:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-55430f8d-b5b1-45bc-a8ab=testing-taint-value-75495bbf-4ec6-4542-b146-a1956ae4c9d7:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ba5e0252-1085-480f-a367=testing-taint-value-d02e268d-4d3c-4a86-b230-cfcc66b56db2:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c0b773e5-8812-4a30-86b7=testing-taint-value-a3875ba7-04e2-4db0-b3c6-5d5d582c470f:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-095df0f8-9eeb-4e86-8220=testing-taint-value-5a9ebc96-aa52-415f-a02c-e39e4ecb4246:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c8d6b869-6a99-47b7-9d50=testing-taint-value-bc2758d2-b1de-4f79-b5be-599eb795a771:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c98f44de-d36a-46c0-abe7=testing-taint-value-d85c98e7-7ba0-4052-85d2-ce72da21a97e:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d0016b9b-66c2-44a3-85c8=testing-taint-value-5d8cb6d4-6877-4092-9e65-cb1908cd18b2:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8ab39c9e-c7ef-4ea2-a458=testing-taint-value-ff6001ea-542b-4730-a2e3-ac7f7b5134a5:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-87ed0f1e-7054-4de6-b914=testing-taint-value-74a3d4a1-7263-4fa9-a5b4-5e49824f178c:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-57c96e37-8da7-41b3-a78d=testing-taint-value-4a5d4376-c526-44d6-af2b-11d3f5eb96a9:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c3eb461d-daa1-4f0a-a827=testing-taint-value-2e977253-37bb-4763-8c94-c84d29d88bed:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f7d07a37-be22-4946-b9b0=testing-taint-value-37d713e1-9c46-47ef-b4dc-a161ccd57192:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-00328afa-dd9f-4df0-8319=testing-taint-value-eb8f2973-2f0d-4006-a183-ad2b6445bf01:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8e84418b-eab2-4b92-b94a=testing-taint-value-c93bb180-8613-4b5f-b2b2-eea7e082d36d:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1de0480e-62de-453f-9b0a=testing-taint-value-24dd178c-9f24-48d7-af57-ae3c5f9e7e0a:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:50:10.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-5985" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:84.587 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":2,"skipped":393,"failed":0}
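(Annotation: in the test above, the first node gets 10 taints the pod tolerates and every other node gets 10 it doesn't, all PreferNoSchedule, so scoring rather than filtering steers the pod. A minimal model of the match rule with local stand-in types, covering only the Equal-operator case this test exercises, not the real k8s.io/api types:

package main

import "fmt"

type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Value, Effect string }

// tolerates reports whether t matches taint: same key and value, and the
// effect matches (an empty toleration effect matches any effect).
func (t Toleration) tolerates(taint Taint) bool {
	return t.Key == taint.Key && t.Value == taint.Value &&
		(t.Effect == "" || t.Effect == taint.Effect)
}

func main() {
	taint := Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-c98f44de-d36a-46c0-abe7",
		Value:  "testing-taint-value-d85c98e7-7ba0-4052-85d2-ce72da21a97e",
		Effect: "PreferNoSchedule",
	}
	tol := Toleration(taint)          // the pod tolerates exactly the first node's taints
	fmt.Println(tol.tolerates(taint)) // true
	// An untolerated PreferNoSchedule taint does not filter a node out; it
	// only lowers the node's score, which is why this is a priorities test.
}
)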
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering
  validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:50:10.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 3 23:50:10.711: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 3 23:50:10.719: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:50:10.721: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 3 23:50:10.731: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container nodereport ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container reconcile ready: true, restart count 0
Jun 3 23:50:10.731: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container discover ready: false, restart count 0
Jun 3 23:50:10.731: INFO: Container init ready: false, restart count 0
Jun 3 23:50:10.731: INFO: Container install ready: false, restart count 0
Jun 3 23:50:10.731: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 23:50:10.731: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 23:50:10.731: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container kube-multus ready: true, restart count 1
Jun 3 23:50:10.731: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 23:50:10.731: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 23:50:10.731: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 23:50:10.731: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 23:50:10.731: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container collectd ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 23:50:10.731: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container node-exporter ready: true, restart count 0
Jun 3 23:50:10.731: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 3 23:50:10.731: INFO: Container config-reloader ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container grafana ready: true, restart count 0
Jun 3 23:50:10.731: INFO: Container prometheus ready: true, restart count 1
Jun 3 23:50:10.731: INFO: with-tolerations from sched-priority-5985 started at 2022-06-03 23:49:51 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.732: INFO: Container with-tolerations ready: true, restart count 0
Jun 3 23:50:10.732: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 3 23:50:10.743: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container discover ready: false, restart count 0
Jun 3 23:50:10.743: INFO: Container init ready: false, restart count 0
Jun 3 23:50:10.743: INFO: Container install ready: false, restart count 0
Jun 3 23:50:10.743: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container nodereport ready: true, restart count 0
Jun 3 23:50:10.743: INFO: Container reconcile ready: true, restart count 0
Jun 3 23:50:10.743: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 23:50:10.743: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kube-multus ready: true, restart count 1
Jun 3 23:50:10.743: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 23:50:10.743: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 3 23:50:10.743: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 3 23:50:10.743: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 23:50:10.743: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 23:50:10.743: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 23:50:10.743: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container collectd ready: true, restart count 0
Jun 3 23:50:10.743: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 23:50:10.743: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 23:50:10.743: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 23:50:10.743: INFO: Container node-exporter ready: true, restart count 0
Jun 3 23:50:10.743: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded)
Jun 3 23:50:10.743: INFO: Container tas-extender ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:50:22.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3304" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:12.173 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":3,"skipped":1673,"failed":0}
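(Annotation: unlike the earlier scoring test, the filtering variant treats MaxSkew=1 as a hard constraint, the whenUnsatisfiable: DoNotSchedule mode of topologySpreadConstraints: a node is feasible only if placing the pod keeps the spread within the allowed skew. A small sketch of that feasibility check over the two kubernetes.io/e2e-pts-filter domains; this greedy loop is our illustration, not the scheduler plugin:

package main

import "fmt"

// feasible reports whether placing one more pod in domain d keeps
// max(count) - min(count) within maxSkew, per topology spread filtering.
func feasible(counts map[string]int, d string, maxSkew int) bool {
	counts[d]++
	defer func() { counts[d]-- }()
	min, max := int(^uint(0)>>1), 0
	for _, c := range counts {
		if c < min {
			min = c
		}
		if c > max {
			max = c
		}
	}
	return max-min <= maxSkew
}

func main() {
	counts := map[string]int{"node1": 0, "node2": 0}
	// Schedule 4 pods with MaxSkew=1: each placement must stay feasible,
	// which forces the 2/2 split the test verifies.
	for i := 0; i < 4; i++ {
		for _, n := range []string{"node1", "node2"} {
			if feasible(counts, n, 1) {
				counts[n]++
				break
			}
		}
	}
	fmt.Println(counts["node1"], counts["node2"]) // 2 2
}
)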
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:50:22.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Jun 3 23:50:22.886: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 23:51:22.942: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:51:22.945: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 23:51:22.964: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 23:51:22.964: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 23:51:22.964: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 23:51:22.964: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 23:51:22.980: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:51:22.980: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:51:22.980: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.980: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.980: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:51:22.980: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:51:22.980: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.980: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:51:22.980: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.980: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.980: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.980: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:51:22.981: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:51:22.981: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:51:22.981: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:51:22.981: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:51:22.981: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.981: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:51:22.981: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:51:22.981: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:51:22.981: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.981: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:51:22.981: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.981: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:51:22.981: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.981: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.981: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.981: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:51:22.981: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.981: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:51:22.981: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[It] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
Jun 3 23:51:22.998: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:51:22.998: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:51:22.998: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.998: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.998: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:51:22.998: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:51:22.998: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.998: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:51:22.998: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.998: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.998: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.998: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:51:22.999: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:51:22.999: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:51:22.999: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:51:22.999: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:51:22.999: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.999: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:51:22.999: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:51:22.999: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:51:22.999: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.999: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:51:22.999: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.999: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:51:22.999: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.999: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.999: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:51:22.999: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:51:22.999: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:51:22.999: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:51:22.999: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
Jun 3 23:51:23.013: INFO: Waiting for running...
Jun 3 23:51:23.016: INFO: Waiting for running...
Jun 3 23:51:28.087: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:51:28.087: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:51:28.087: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:51:28.087: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:51:28.087: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:51:28.087: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:51:28.087: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:51:28.087: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:51:28.087: INFO: Pod for on the node: 4787ff7f-5c45-4c77-b144-c40951a52f3a-0, Cpu: 37613, Mem: 87744079872
Jun 3 23:51:28.087: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 3 23:51:28.087: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:51:28.087: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:51:28.087: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:51:28.087: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:51:28.087: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:51:28.087: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:51:28.087: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:51:28.087: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:51:28.087: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:51:28.087: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:51:28.087: INFO: Pod for on the node: 590cef25-5f94-4d1e-bc7a-0e52c317d8b6-0, Cpu: 37963, Mem: 88885940224
Jun 3 23:51:28.087: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 3 23:51:28.087: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
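The avoidPod annotation being applied here is, to the best of my knowledge, the alpha node annotation scheduler.alpha.kubernetes.io/preferAvoidPods, whose value is a JSON-encoded v1.AvoidPods entry naming the controller whose pods the node should score poorly for. A hedged sketch of building that value (the RC name comes from the log below; the UID is a placeholder, since the real test copies it from the live ReplicationController):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod", // RC name from the log
					UID:        "placeholder-uid",              // hypothetical; the test uses the live RC's UID
					Controller: &controller,
				},
			},
			Reason:  "some reason",  // illustrative
			Message: "some message", // illustrative
		}},
	}
	b, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}
	// This string would go under the node annotation key
	// scheduler.alpha.kubernetes.io/preferAvoidPods.
	fmt.Println(string(b))
}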
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7590 to 1
STEP: Verify the pods should not scheduled to the node: node1
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-7590, will wait for the garbage collector to delete the pods
Jun 3 23:51:34.274: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.87221ms
Jun 3 23:51:34.375: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 101.135277ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:51:44.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-7590" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:81.438 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should avoid nodes that have avoidPod annotation
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":4,"skipped":1894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:51:44.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Jun 3 23:51:44.330: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 23:52:44.397: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:52:44.399: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 3 23:52:44.419: INFO: The status of Pod cmk-init-discover-node1-n75dv is Succeeded, skipping waiting
Jun 3 23:52:44.419: INFO: The status of Pod cmk-init-discover-node2-xvf8p is Succeeded, skipping waiting
Jun 3 23:52:44.419: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 3 23:52:44.419: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 3 23:52:44.436: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:52:44.436: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:52:44.436: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:52:44.436: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:52:44.436: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:52:44.436: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:52:44.436: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:52:44.436: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:52:44.436: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:52:44.436: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:52:44.436: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:52:44.436: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:52:44.436: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:52:44.436: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:52:44.436: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:52:44.436: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:52:44.436: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:52:44.436: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:52:44.436: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:52:44.436: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:52:44.436: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:52:44.436: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[It] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
STEP: Trying to launch a pod with a label to get a node which can launch it.
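The probe pod launched here shows up later in this log as pod-with-label-security-s1; its only job is to reveal a schedulable node and to carry the label the anti-affinity term will match. A minimal sketch, assuming the security=S1 label implied by the pod's name and the pause image the image-pull events elsewhere in this suite show:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-with-label-security-s1",
			Labels: map[string]string{"security": "S1"}, // assumed from the pod's name
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pod-with-label-security-s1",
				Image: "k8s.gcr.io/pause:3.4.1", // image pulled elsewhere in this suite
			}},
		},
	}
	fmt.Println(pod.Name)
}

Once the pod runs, reading back its Spec.NodeName (node2, per the later pod listing) tells the test which node the anti-affinity pod should steer away from.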
STEP: Verifying the node has a label kubernetes.io/hostname
Jun 3 23:52:48.481: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:52:48.481: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:52:48.481: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:52:48.481: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:52:48.481: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:52:48.481: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:52:48.481: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:52:48.481: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:52:48.481: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 3 23:52:48.481: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 3 23:52:48.481: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:52:48.481: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:52:48.481: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:52:48.481: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:52:48.481: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:52:48.481: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:52:48.481: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:52:48.481: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:52:48.481: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Jun 3 23:52:48.481: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 3 23:52:48.481: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
Jun 3 23:52:48.493: INFO: Waiting for running...
Jun 3 23:52:48.497: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:52:53.567: INFO: ComputeCPUMemFraction for node: node1
Jun 3 23:52:53.567: INFO: Pod for on the node: cmk-84nbw, Cpu: 200, Mem: 419430400
Jun 3 23:52:53.567: INFO: Pod for on the node: cmk-init-discover-node1-n75dv, Cpu: 300, Mem: 629145600
Jun 3 23:52:53.567: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-c927x, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.567: INFO: Pod for on the node: kube-flannel-hm6bh, Cpu: 150, Mem: 64000000
Jun 3 23:52:53.567: INFO: Pod for on the node: kube-multus-ds-amd64-p7r6j, Cpu: 100, Mem: 94371840
Jun 3 23:52:53.567: INFO: Pod for on the node: kube-proxy-b6zlv, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.567: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 3 23:52:53.567: INFO: Pod for on the node: node-feature-discovery-worker-rg6tx, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.567: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.567: INFO: Pod for on the node: collectd-nbx5z, Cpu: 300, Mem: 629145600
Jun 3 23:52:53.567: INFO: Pod for on the node: node-exporter-f5xkq, Cpu: 112, Mem: 209715200
Jun 3 23:52:53.567: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 3 23:52:53.567: INFO: Pod for on the node: cc7e529d-f80f-4d37-bd11-904666097d53-0, Cpu: 45313, Mem: 105632540672
Jun 3 23:52:53.567: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
Jun 3 23:52:53.567: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 3 23:52:53.567: INFO: ComputeCPUMemFraction for node: node2
Jun 3 23:52:53.567: INFO: Pod for on the node: cmk-init-discover-node2-xvf8p, Cpu: 300, Mem: 629145600
Jun 3 23:52:53.567: INFO: Pod for on the node: cmk-v446x, Cpu: 200, Mem: 419430400
Jun 3 23:52:53.567: INFO: Pod for on the node: kube-flannel-pc7wj, Cpu: 150, Mem: 64000000
Jun 3 23:52:53.567: INFO: Pod for on the node: kube-multus-ds-amd64-n7spl, Cpu: 100, Mem: 94371840
Jun 3 23:52:53.567: INFO: Pod for on the node: kube-proxy-qmkcq, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.567: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-25c95, Cpu: 50, Mem: 64000000
Jun 3 23:52:53.568: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-fz4kn, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.568: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 3 23:52:53.568: INFO: Pod for on the node: node-feature-discovery-worker-gn855, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.568: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.568: INFO: Pod for on the node: collectd-q2l4t, Cpu: 300, Mem: 629145600
Jun 3 23:52:53.568: INFO: Pod for on the node: node-exporter-g45bm, Cpu: 112, Mem: 209715200
Jun 3 23:52:53.568: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.568: INFO: Pod for on the node: 645a0961-1d80-4716-99bc-050d02077724-0, Cpu: 45663, Mem: 106774400614
Jun 3 23:52:53.568: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Jun 3 23:52:53.568: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
Jun 3 23:52:53.568: INFO: Node: node2, totalRequestedMemResource: 107343345254, memAllocatableVal: 178884603904, memFraction: 0.6000703409422913
STEP: Trying to launch the pod with podAntiAffinity.
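Because this is a priorities (scoring) test rather than a predicates one, the anti-affinity term is presumably of the preferred rather than required flavor: the scheduler should merely prefer nodes with no security=S1 pods within the kubernetes.io/hostname topology. A sketch of such a term (the weight is an assumption):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	affinity := v1.Affinity{
		PodAntiAffinity: &v1.PodAntiAffinity{
			PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
				Weight: 100, // assumed weight
				PodAffinityTerm: v1.PodAffinityTerm{
					LabelSelector: &metav1.LabelSelector{
						MatchExpressions: []metav1.LabelSelectorRequirement{{
							Key:      "security",
							Operator: metav1.LabelSelectorOpIn,
							Values:   []string{"S1"},
						}},
					},
					TopologyKey: "kubernetes.io/hostname", // the node label verified above
				},
			}},
		},
	}
	fmt.Printf("%+v\n", affinity.PodAntiAffinity)
}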
STEP: Wait the pod becomes running
STEP: Verify the pod was scheduled to the expected node.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:53:03.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-7897" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:79.317 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":5,"skipped":1934,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:53:03.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 3 23:53:03.650: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 3 23:53:03.659: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:53:03.661: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 3 23:53:03.669: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container nodereport ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container reconcile ready: true, restart count 0
Jun 3 23:53:03.669: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container discover ready: false, restart count 0
Jun 3 23:53:03.669: INFO: Container init ready: false, restart count 0
Jun 3 23:53:03.669: INFO: Container install ready: false, restart count 0
Jun 3 23:53:03.669: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 23:53:03.669: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 23:53:03.669: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container kube-multus ready: true, restart count 1
Jun 3 23:53:03.669: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 23:53:03.669: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 23:53:03.669: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 23:53:03.669: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 23:53:03.669: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container collectd ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 23:53:03.669: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container node-exporter ready: true, restart count 0
Jun 3 23:53:03.669: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container config-reloader ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container grafana ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Container prometheus ready: true, restart count 1
Jun 3 23:53:03.669: INFO: pod-with-pod-antiaffinity from sched-priority-7897 started at 2022-06-03 23:52:53 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.669: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
Jun 3 23:53:03.669: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 3 23:53:03.687: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container discover ready: false, restart count 0
Jun 3 23:53:03.688: INFO: Container init ready: false, restart count 0
Jun 3 23:53:03.688: INFO: Container install ready: false, restart count 0
Jun 3 23:53:03.688: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container nodereport ready: true, restart count 0
Jun 3 23:53:03.688: INFO: Container reconcile ready: true, restart count 0
Jun 3 23:53:03.688: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 23:53:03.688: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kube-multus ready: true, restart count 1
Jun 3 23:53:03.688: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 23:53:03.688: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 3 23:53:03.688: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 3 23:53:03.688: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 23:53:03.688: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 23:53:03.688: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 23:53:03.688: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container collectd ready: true, restart count 0
Jun 3 23:53:03.688: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 23:53:03.688: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 23:53:03.688: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 23:53:03.688: INFO: Container node-exporter ready: true, restart count 0
Jun 3 23:53:03.688: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container tas-extender ready: true, restart count 0
Jun 3 23:53:03.688: INFO: pod-with-label-security-s1 from sched-priority-7897 started at 2022-06-03 23:52:44 +0000 UTC (1 container statuses recorded)
Jun 3 23:53:03.688: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
[It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
Jun 3 23:53:03.730: INFO: Pod cmk-84nbw requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod cmk-v446x requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod cmk-webhook-6c9d5f8578-c927x requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod kube-flannel-hm6bh requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod kube-flannel-pc7wj requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod kube-multus-ds-amd64-n7spl requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod kube-multus-ds-amd64-p7r6j requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod kube-proxy-b6zlv requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod kube-proxy-qmkcq requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod kubernetes-dashboard-785dcbb76d-25c95 requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod kubernetes-metrics-scraper-5558854cb-fz4kn requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod node-feature-discovery-worker-gn855 requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod node-feature-discovery-worker-rg6tx requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod collectd-nbx5z requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod collectd-q2l4t requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod node-exporter-f5xkq requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod node-exporter-g45bm requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod pod-with-label-security-s1 requesting local ephemeral resource =0 on Node node2
Jun 3 23:53:03.730: INFO: Pod pod-with-pod-antiaffinity requesting local ephemeral resource =0 on Node node1
Jun 3 23:53:03.730: INFO: Using pod capacity: 40608090249
Jun 3 23:53:03.730: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496
Jun 3 23:53:03.730: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496
STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one
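The saturation arithmetic is visible in the three INFO lines above: each node reports 406080902496 bytes of allocatable local ephemeral storage, and the per-pod capacity 40608090249 is one tenth of that, rounded down (406080902496 / 10 = 40608090249.6). Ten pods per node, twenty in total, therefore fill both nodes, and the twenty-first pod must fail to schedule. A sketch of one saturating pod's resource stanza:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Per-pod share taken from the "Using pod capacity" line above.
	perPod := resource.NewQuantity(40608090249, resource.BinarySI)
	requests := v1.ResourceList{v1.ResourceEphemeralStorage: *perPod}

	// A pod requesting this much in each of the 20 slots leaves no room
	// for a 21st pod of the same size.
	fmt.Println(perPod.String(), len(requests))
}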
Jun 3 23:53:03.923: INFO: Waiting for running...
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f5416f2539636f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-0 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f5416f9886f3c3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f5416fab06ad44], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 310.352742ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f5416fcbd22462], Reason = [Created], Message = [Created container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f54170660254d4], Reason = [Started], Message = [Started container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f5416f25b92b92], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-1 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f5417086b83e4b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f5417097029179], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 273.300574ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f54170a5c076b8], Reason = [Created], Message = [Created container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f54171037e2f2f], Reason = [Started], Message = [Started container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f5416f2a9d5c23], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-10 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f541700cf0c12d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f541701e68af8a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 293.066017ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f541703e67a9fc], Reason = [Created], Message = [Created container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f5417081d57f9d], Reason = [Started], Message = [Started container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f5416f2b402337], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-11 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f541713866660c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f541715c5a8709], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 603.192987ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f5417162c1d622], Reason = [Created], Message = [Created container overcommit-11]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f54171698ced99], Reason = [Started], Message = [Started container overcommit-11]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f5416f2bd5ae84], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-12 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f541704e11634c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f541706609fff2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 402.158637ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f5417086431a57], Reason = [Created], Message = [Created container overcommit-12]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f54170c086cfc3], Reason = [Started], Message = [Started container overcommit-12]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f5416f2c4f7981], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-13 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f5417123138c5c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f54171339658be], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 276.999117ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f541714b0f61b9], Reason = [Created], Message = [Created container overcommit-13]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f541715308f507], Reason = [Started], Message = [Started container overcommit-13]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f5416f2cd65382], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-14 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f5417100154760], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f541712d61cf62], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 759.981416ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f5417153dd99ac], Reason = [Created], Message = [Created container overcommit-14]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f5417160931e8d], Reason = [Started], Message = [Started container overcommit-14]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f5416f2d4e510d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-15 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f541713972df30], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f541716df1a2c3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 880.716845ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f5417174661fd9], Reason = [Created], Message = [Created container overcommit-15]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f541717b54ef7f], Reason = [Started], Message = [Started container overcommit-15]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f5416f2dfb62c0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-16 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f54171018a7817], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f541713e8dad0c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.023614262s]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f541715989f121], Reason = [Created], Message = [Created container overcommit-16]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f5417165308a96], Reason = [Started], Message = [Started container overcommit-16]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f5416f2e880445], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-17 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f54170ff76f02a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f541711d5e6397], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 501.698394ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f541712f47656a], Reason = [Created], Message = [Created container overcommit-17]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f5417159854f54], Reason = [Started], Message = [Started container overcommit-17]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f5416f2f1e0b96], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-18 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f541714f75831c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f54171738f0973], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 605.645313ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f541717a473508], Reason = [Created], Message = [Created container overcommit-18]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f5417181eb680d], Reason = [Started], Message = [Started container overcommit-18]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f5416f2f955117], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-19 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f541714f74368b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f5417160cec121], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 291.138362ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f54171687696a6], Reason = [Created], Message = [Created container overcommit-19]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f541716f46938b], Reason = [Started], Message = [Started container overcommit-19]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f5416f264eb6e4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-2 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f5416f9b03c6d5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f5416fb6804277], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 461.136099ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f5416feb8870e2], Reason = [Created], Message = [Created container overcommit-2]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f54170280c9d8e], Reason = [Started], Message = [Started container overcommit-2]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f5416f26e40445], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-3 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f541713d35ee90], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f541718079052a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.128463532s]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f54171870d2ef4], Reason = [Created], Message = [Created container overcommit-3]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f541718d918e9a], Reason = [Started], Message = [Started container overcommit-3]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f5416f276c75d0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-4 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f54170df4c6a38], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f54170f150ae81], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 302.262851ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f541710e4a216a], Reason = [Created], Message = [Created container overcommit-4]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f5417152f5767a], Reason = [Started], Message = [Started container overcommit-4]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f5416f27f52f00], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-5 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f5417078c3a98b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f541708b64cae5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 312.539558ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f54170c51526ae], Reason = [Created], Message = [Created container overcommit-5]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f541713e713280], Reason = [Started], Message = [Started container overcommit-5]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f5416f287fddac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-6 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f54170727254f5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f5417083a6241b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 288.599856ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f54170954267ce], Reason = [Created], Message = [Created container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f5417106b0c64e], Reason = [Started], Message = [Started container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f5416f28f7f14a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-7 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f541707934cc8c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f541709d26ee40], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 603.065467ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f54170c09b1a8a], Reason = [Created], Message = [Created container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f5417139dd8b0d], Reason = [Started], Message = [Started container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f5416f298cdd95], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-8 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f5416ff17b2c1a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f5417005acb98d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 338.78327ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f54170231fd037], Reason = [Created], Message = [Created container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f5417062a9a76e], Reason = [Started], Message = [Started container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f5416f2a25006b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5338/overcommit-9 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f541713835d35f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f541714b4415ba], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 319.696876ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f5417153998292], Reason = [Created], Message = [Created container overcommit-9]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f541715a9c872a], Reason = [Started], Message = [Started container overcommit-9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16f54172b1d2f189], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
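The FailedScheduling message is the scheduler's filter summary: of the 5 nodes, the 2 workers are out of ephemeral storage and the 3 masters are excluded by the node-role.kubernetes.io/master taint. For reference, a pod that did want to land on a master would carry a toleration along these lines (the NoSchedule effect is assumed; the event message omits it):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	tol := v1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: v1.TolerationOpExists,
		Effect:   v1.TaintEffectNoSchedule, // assumed effect for this taint
	}
	fmt.Printf("%+v\n", tol)
}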
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:53:20.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5338" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.400 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":6,"skipped":1968,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:53:20.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jun 3 23:53:20.056: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 3 23:54:20.127: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node2.
STEP: Apply 10 fake resource to node node1.
[It] validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
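The medium pod's constraint is what forces the preemption: with 9/10 of the fake resource taken on each node, the only way to respect the spread constraint is to evict a lower-priority pod. A sketch of such a pod spec, using the dedicated topology key applied above (the priority class name and label selector are hypothetical stand-ins for whatever the test pre-creates):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	spec := v1.PodSpec{
		PriorityClassName: "medium-priority", // hypothetical name
		TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-preemption", // key applied to the 2 nodes above
			WhenUnsatisfiable: v1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"group": "preemption"}, // hypothetical selector
			},
		}},
	}
	fmt.Printf("%+v\n", spec.TopologySpreadConstraints[0])
}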
[AfterEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:54:54.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8471" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:94.447 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":7,"skipped":2083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 3 23:54:54.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 3 23:54:54.515: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 3 23:54:54.523: INFO: Waiting for terminating namespaces to be deleted...
Jun 3 23:54:54.526: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 3 23:54:54.535: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container nodereport ready: true, restart count 0
Jun 3 23:54:54.535: INFO: Container reconcile ready: true, restart count 0
Jun 3 23:54:54.535: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container discover ready: false, restart count 0
Jun 3 23:54:54.535: INFO: Container init ready: false, restart count 0
Jun 3 23:54:54.535: INFO: Container install ready: false, restart count 0
Jun 3 23:54:54.535: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container cmk-webhook ready: true, restart count 0
Jun 3 23:54:54.535: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container kube-flannel ready: true, restart count 3
Jun 3 23:54:54.535: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container kube-multus ready: true, restart count 1
Jun 3 23:54:54.535: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container kube-proxy ready: true, restart count 2
Jun 3 23:54:54.535: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 23:54:54.535: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 23:54:54.535: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 23:54:54.535: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container collectd ready: true, restart count 0
Jun 3 23:54:54.535: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 23:54:54.535: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 23:54:54.535: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 23:54:54.535: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 23:54:54.535: INFO: Container node-exporter ready: true, restart count 0
monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 23:54:54.535: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:54:54.535: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:54:54.535: INFO: Container grafana ready: true, restart count 0 Jun 3 23:54:54.535: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:54:54.535: INFO: low-1 from sched-preemption-8471 started at 2022-06-03 23:54:36 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.535: INFO: Container low-1 ready: true, restart count 0 Jun 3 23:54:54.535: INFO: medium from sched-preemption-8471 started at 2022-06-03 23:54:51 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.535: INFO: Container medium ready: true, restart count 0 Jun 3 23:54:54.535: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 23:54:54.546: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 23:54:54.546: INFO: Container discover ready: false, restart count 0 Jun 3 23:54:54.546: INFO: Container init ready: false, restart count 0 Jun 3 23:54:54.547: INFO: Container install ready: false, restart count 0 Jun 3 23:54:54.547: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:54:54.547: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:54:54.547: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:54:54.547: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:54:54.547: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:54:54.547: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:54:54.547: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 23:54:54.547: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 23:54:54.547: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:54:54.547: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:54:54.547: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:54:54.547: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:54:54.547: INFO: Container collectd ready: true, 
restart count 0 Jun 3 23:54:54.547: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:54:54.547: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:54:54.547: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:54:54.547: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:54:54.547: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:54:54.547: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container tas-extender ready: true, restart count 0 Jun 3 23:54:54.547: INFO: high from sched-preemption-8471 started at 2022-06-03 23:54:32 +0000 UTC (1 container statuses recorded) Jun 3 23:54:54.547: INFO: Container high ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f54189e4e73b71], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Warning], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f5418a56287c00], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f5418ace5e475c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8168/filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f5418b256d8c27], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f5418b368e3dfc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 287.348857ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f5418b3cc72640], Reason = [Created], Message = [Created container filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5] STEP: Considering event: Type = [Normal], Name = [filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5.16f5418b439cd0ee], Reason = [Started], Message = [Started container filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5] STEP: Considering event: Type = [Normal], Name = [without-label.16f54188f35b87b0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8168/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16f5418948296d58], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16f5418958caacdb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 278.997565ms] STEP: Considering event: Type = [Normal], Name = [without-label.16f541895f416d70], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16f5418965988a8d], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16f54189e3c6785d], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-poda0184f2b-2cf2-415b-aa04-30d924cb0e78.16f5418b4bc6923d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:55:05.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8168" for this suite. 
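The FailedScheduling messages above ("0/5 nodes are available: 5 Insufficient example.com/beardsecond.") arise because the scheduler charges each pod its RuntimeClass overhead against the fake extended resource the test attaches to every node, so the second pod no longer fits once overhead is counted. A minimal sketch of a RuntimeClass carrying pod overhead, with hypothetical name, handler, and quantities (illustrative, not the test's values):

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical RuntimeClass: every pod that sets spec.runtimeClassName
	// to "overhead-demo" is charged this extra CPU/memory on top of its own
	// container requests at scheduling (and quota) time.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"},
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("250m"),
				v1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}
	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
```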
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.196 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":8,"skipped":3482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:55:05.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 23:55:05.711: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 23:55:05.719: INFO: Waiting for terminating namespaces to be deleted... 
Jun 3 23:55:05.722: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 23:55:05.731: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:05.731: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:05.731: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:05.731: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 23:55:05.731: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:05.731: INFO: Container init ready: false, restart count 0 Jun 3 23:55:05.731: INFO: Container install ready: false, restart count 0 Jun 3 23:55:05.731: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.731: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 23:55:05.731: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.731: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 23:55:05.731: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.731: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:05.732: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.732: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:55:05.732: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.732: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:05.732: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.732: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:05.732: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.732: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:05.732: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:05.732: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:05.732: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:05.732: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:05.732: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:05.732: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:05.732: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:05.732: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 23:55:05.732: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:55:05.732: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:55:05.732: INFO: Container grafana ready: true, restart count 0 Jun 3 23:55:05.732: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:55:05.732: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 23:55:05.739: INFO: cmk-init-discover-node2-xvf8p from kube-system 
started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 23:55:05.739: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:05.739: INFO: Container init ready: false, restart count 0 Jun 3 23:55:05.739: INFO: Container install ready: false, restart count 0 Jun 3 23:55:05.739: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:05.739: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:05.739: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:05.739: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:55:05.739: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:05.739: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:55:05.739: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 23:55:05.739: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 23:55:05.739: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:05.739: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:05.739: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:05.739: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:05.739: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:05.739: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:05.739: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:05.739: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:05.739: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:05.739: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:05.739: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.739: INFO: Container tas-extender ready: true, restart count 0 Jun 3 23:55:05.740: INFO: filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5 from sched-pred-8168 started at 2022-06-03 23:55:02 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.740: INFO: Container filler-pod-af457a7d-a48d-4eeb-af47-fac6a3a2b5d5 ready: true, 
restart count 0 Jun 3 23:55:05.740: INFO: high from sched-preemption-8471 started at 2022-06-03 23:54:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:05.740: INFO: Container high ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f5418cf8429e2c], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match Pod's node affinity/selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:55:12.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-922" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.174 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":9,"skipped":3599,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:55:12.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 23:55:12.893: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 23:55:12.901: INFO: Waiting for terminating namespaces to be deleted... 
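In the NodeAffinity spec just concluded, restricted-pod carries a node selector that no node can satisfy, so all 5 nodes are rejected with "didn't match Pod's node affinity/selector" and the pod stays Pending for the duration of the test. A minimal sketch of such an unsatisfiable selector, with a hypothetical key/value pair:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the node selector references a label that is never
	// applied to any node, so scheduling must fail with
	// "node(s) didn't match Pod's node affinity/selector".
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: v1.PodSpec{
			Containers:   []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			NodeSelector: map[string]string{"label": "nonempty"},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.NodeSelector, "", "  ")
	fmt.Println(string(out))
}
```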
Jun 3 23:55:12.903: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 23:55:12.913: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:12.914: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:12.914: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 23:55:12.914: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:12.914: INFO: Container init ready: false, restart count 0 Jun 3 23:55:12.914: INFO: Container install ready: false, restart count 0 Jun 3 23:55:12.914: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 23:55:12.914: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 23:55:12.914: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:12.914: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:55:12.914: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:12.914: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:12.914: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.914: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:12.914: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:12.914: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:12.914: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:12.914: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:12.914: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 23:55:12.914: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container grafana ready: true, restart count 0 Jun 3 23:55:12.914: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:55:12.914: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 23:55:12.923: INFO: cmk-init-discover-node2-xvf8p from kube-system 
started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 23:55:12.923: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:12.923: INFO: Container init ready: false, restart count 0 Jun 3 23:55:12.923: INFO: Container install ready: false, restart count 0 Jun 3 23:55:12.923: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:12.923: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:12.923: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:12.923: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:55:12.923: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:12.923: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:55:12.923: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 23:55:12.923: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 23:55:12.923: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:12.923: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:12.923: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:12.923: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:12.923: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:12.923: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:12.923: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:12.923: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:12.923: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:12.923: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:12.923: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 23:55:12.923: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b47984c3-2e11-4c5d-a8dd-58fffd6b4a21=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-67b9e7ae-4ed1-4590-8880-05321f32fdc1 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418d399b4f7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8030/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418d8f12c8b3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418d9e1a1e05], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 252.13301ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418da4fba365], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418dab81a25f], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418e2ab416a2], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f5418e2b8f35d7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-b47984c3-2e11-4c5d-a8dd-58fffd6b4a21: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f5418e2b8f35d7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-b47984c3-2e11-4c5d-a8dd-58fffd6b4a21: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match Pod's node affinity/selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418d399b4f7d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8030/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418d8f12c8b3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418d9e1a1e05], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 252.13301ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418da4fba365], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418dab81a25f], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16f5418e2ab416a2], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b47984c3-2e11-4c5d-a8dd-58fffd6b4a21=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16f5418e8831aa11], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8030/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-67b9e7ae-4ed1-4590-8880-05321f32fdc1 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-67b9e7ae-4ed1-4590-8880-05321f32fdc1 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b47984c3-2e11-4c5d-a8dd-58fffd6b4a21=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:55:19.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8030" for this suite. 
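The sequence above taints one node with NoSchedule, relaunches the pod without any matching toleration, and expects FailedScheduling until the taint is removed, at which point still-no-tolerations finally schedules. A minimal sketch of the taint/toleration pair involved, assuming a hypothetical key (the suite randomizes it per run):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical taint, shaped like the one the test applies to the node.
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-demo",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// The toleration a pod would need; "still-no-tolerations" omits it,
	// which is exactly why the scheduler reports the taint as untolerated.
	tol := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}

	fmt.Println(tol.ToleratesTaint(&taint)) // true: this toleration would admit the pod
}
```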
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.186 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":10,"skipped":4013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:55:19.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 23:55:19.080: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 23:55:19.088: INFO: Waiting for terminating namespaces to be deleted... 
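The hostPort spec that runs below creates three pods on one node that all bind hostPort 54321; they coexist because a hostPort conflict requires the full (hostIP, protocol, hostPort) triple to collide. A minimal sketch of the three port declarations, using the addresses visible in the log that follows:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Three pods' port declarations: identical hostPort, but differing
	// hostIP or protocol, so no (hostIP, protocol, hostPort) triple repeats.
	ports := []v1.ContainerPort{
		{HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP},     // pod1
		{HostPort: 54321, HostIP: "10.10.190.207", Protocol: v1.ProtocolTCP}, // pod2
		{HostPort: 54321, HostIP: "10.10.190.207", Protocol: v1.ProtocolUDP}, // pod3
	}
	seen := map[string]bool{}
	for i, p := range ports {
		key := fmt.Sprintf("%s/%s/%d", p.HostIP, p.Protocol, p.HostPort)
		fmt.Printf("pod%d -> %s conflict=%v\n", i+1, key, seen[key])
		seen[key] = true
	}
}
```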
Jun 3 23:55:19.090: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 23:55:19.100: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:19.100: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:19.100: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 23:55:19.100: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:19.100: INFO: Container init ready: false, restart count 0 Jun 3 23:55:19.100: INFO: Container install ready: false, restart count 0 Jun 3 23:55:19.100: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 23:55:19.100: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 23:55:19.100: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:19.100: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:55:19.100: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:19.100: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:19.100: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.100: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:19.100: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:19.100: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:19.100: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:19.100: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:19.100: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 23:55:19.100: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container grafana ready: true, restart count 0 Jun 3 23:55:19.100: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:55:19.100: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 23:55:19.109: INFO: cmk-init-discover-node2-xvf8p from kube-system 
started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 23:55:19.109: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:19.109: INFO: Container init ready: false, restart count 0 Jun 3 23:55:19.109: INFO: Container install ready: false, restart count 0 Jun 3 23:55:19.109: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:19.109: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:19.109: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:19.109: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:55:19.109: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:19.109: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:55:19.109: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 23:55:19.109: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 23:55:19.109: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:19.109: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:19.109: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:19.109: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:19.109: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:19.109: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:19.109: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:19.109: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:19.109: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:19.110: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:19.110: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.110: INFO: Container tas-extender ready: true, restart count 0 Jun 3 23:55:19.110: INFO: still-no-tolerations from sched-pred-8030 started at 2022-06-03 23:55:18 +0000 UTC (1 container statuses recorded) Jun 3 23:55:19.110: INFO: Container still-no-tolerations ready: false, restart count 0 [It] validates that there is no conflict 
between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f88fb8ed-949a-485c-a6c8-9995c29b9f37 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-f88fb8ed-949a-485c-a6c8-9995c29b9f37 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-f88fb8ed-949a-485c-a6c8-9995c29b9f37 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:55:35.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4899" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":11,"skipped":4128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:55:35.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 23:55:35.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 23:55:35.287: INFO: Waiting for terminating namespaces to be deleted... Jun 3 23:55:35.289: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 23:55:35.298: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:35.298: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:35.298: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:35.298: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 23:55:35.299: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:35.299: INFO: Container init ready: false, restart count 0 Jun 3 23:55:35.299: INFO: Container install ready: false, restart count 0 Jun 3 23:55:35.299: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 23:55:35.299: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 23:55:35.299: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:35.299: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:55:35.299: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:35.299: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:35.299: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:35.299: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:35.299: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:35.299: INFO: node-exporter-f5xkq from monitoring 
started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:35.299: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:35.299: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 23:55:35.299: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Container grafana ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:55:35.299: INFO: pod1 from sched-pred-4899 started at 2022-06-03 23:55:23 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container agnhost ready: true, restart count 0 Jun 3 23:55:35.299: INFO: pod2 from sched-pred-4899 started at 2022-06-03 23:55:27 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container agnhost ready: true, restart count 0 Jun 3 23:55:35.299: INFO: pod3 from sched-pred-4899 started at 2022-06-03 23:55:31 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.299: INFO: Container agnhost ready: true, restart count 0 Jun 3 23:55:35.299: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 3 23:55:35.310: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded) Jun 3 23:55:35.310: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:35.310: INFO: Container init ready: false, restart count 0 Jun 3 23:55:35.310: INFO: Container install ready: false, restart count 0 Jun 3 23:55:35.310: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:35.310: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:35.310: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:35.310: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kube-flannel ready: true, restart count 1 Jun 3 23:55:35.310: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:35.310: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kube-proxy ready: true, restart count 1 Jun 3 23:55:35.310: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 3 23:55:35.310: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 3 23:55:35.310: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:35.310: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container nfd-worker 
ready: true, restart count 0 Jun 3 23:55:35.310: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:35.310: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:35.310: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:35.310: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:35.310: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:35.310: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:35.310: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:35.310: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:35.310: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded) Jun 3 23:55:35.310: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-811b1837-5e99-498e-8536-7a1c84a2c465=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-001f733c-2238-40c3-b2b2-3b89ae402da4 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-001f733c-2238-40c3-b2b2-3b89ae402da4 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-001f733c-2238-40c3-b2b2-3b89ae402da4 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-811b1837-5e99-498e-8536-7a1c84a2c465=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 3 23:55:43.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3854" for this suite. 
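Unlike the earlier not-matching case, with-tolerations above is relaunched carrying a toleration for the freshly applied taint plus a selector for the test label, so it schedules onto the tainted node2. A minimal sketch of a pod spec carrying such a toleration, with hypothetical keys standing in for the suite's randomized ones:

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical "with-tolerations" pod: pinned to the labelled node and
	// tolerating the NoSchedule taint the test applied to it.
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: v1.PodSpec{
			Containers:   []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			NodeSelector: map[string]string{"kubernetes.io/e2e-label-key-demo": "testing-label-value"},
			Tolerations: []v1.Toleration{{
				Key:      "kubernetes.io/e2e-taint-key-demo",
				Operator: v1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   v1.TaintEffectNoSchedule,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec.Tolerations, "", "  ")
	fmt.Println(string(out))
}
```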
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.188 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":12,"skipped":5322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 3 23:55:43.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 3 23:55:43.463: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 3 23:55:43.471: INFO: Waiting for terminating namespaces to be deleted... 
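In the required-NodeAffinity spec that runs below, a random label (value "42" in this run) is applied to the chosen node and the pod is relaunched requiring that label, so it must land on that node. A minimal sketch of such a hard node-affinity requirement, with a hypothetical key (the shape is the standard API; the exact construction in predicates.go may differ):

```go
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical required node affinity: schedulable only onto nodes whose
	// random test label carries the expected value ("42" in the log below).
	aff := v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-demo-key", // the suite randomizes this key
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(aff, "", "  ")
	fmt.Println(string(out))
}
```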
Jun 3 23:55:43.474: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 3 23:55:43.483: INFO: cmk-84nbw from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded) Jun 3 23:55:43.483: INFO: Container nodereport ready: true, restart count 0 Jun 3 23:55:43.483: INFO: Container reconcile ready: true, restart count 0 Jun 3 23:55:43.483: INFO: cmk-init-discover-node1-n75dv from kube-system started at 2022-06-03 20:11:42 +0000 UTC (3 container statuses recorded) Jun 3 23:55:43.483: INFO: Container discover ready: false, restart count 0 Jun 3 23:55:43.483: INFO: Container init ready: false, restart count 0 Jun 3 23:55:43.483: INFO: Container install ready: false, restart count 0 Jun 3 23:55:43.483: INFO: cmk-webhook-6c9d5f8578-c927x from kube-system started at 2022-06-03 20:12:25 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container cmk-webhook ready: true, restart count 0 Jun 3 23:55:43.483: INFO: kube-flannel-hm6bh from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container kube-flannel ready: true, restart count 3 Jun 3 23:55:43.483: INFO: kube-multus-ds-amd64-p7r6j from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container kube-multus ready: true, restart count 1 Jun 3 23:55:43.483: INFO: kube-proxy-b6zlv from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container kube-proxy ready: true, restart count 2 Jun 3 23:55:43.483: INFO: nginx-proxy-node1 from kube-system started at 2022-06-03 19:59:31 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container nginx-proxy ready: true, restart count 2 Jun 3 23:55:43.483: INFO: node-feature-discovery-worker-rg6tx from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container nfd-worker ready: true, restart count 0 Jun 3 23:55:43.483: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-qwqjx from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.483: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 3 23:55:43.483: INFO: collectd-nbx5z from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded) Jun 3 23:55:43.483: INFO: Container collectd ready: true, restart count 0 Jun 3 23:55:43.483: INFO: Container collectd-exporter ready: true, restart count 0 Jun 3 23:55:43.483: INFO: Container rbac-proxy ready: true, restart count 0 Jun 3 23:55:43.483: INFO: node-exporter-f5xkq from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded) Jun 3 23:55:43.483: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 3 23:55:43.483: INFO: Container node-exporter ready: true, restart count 0 Jun 3 23:55:43.484: INFO: prometheus-k8s-0 from monitoring started at 2022-06-03 20:13:45 +0000 UTC (4 container statuses recorded) Jun 3 23:55:43.484: INFO: Container config-reloader ready: true, restart count 0 Jun 3 23:55:43.484: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 3 23:55:43.484: INFO: Container grafana ready: true, restart count 0 Jun 3 23:55:43.484: INFO: Container prometheus ready: true, restart count 1 Jun 3 23:55:43.484: INFO: pod1 from sched-pred-4899 started at 2022-06-03 23:55:23 +0000 UTC (1 container statuses recorded) Jun 3 23:55:43.484: INFO: Container 
Jun 3 23:55:43.484: INFO: pod2 from sched-pred-4899 started at 2022-06-03 23:55:27 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.484: INFO: Container agnhost ready: true, restart count 0
Jun 3 23:55:43.484: INFO: pod3 from sched-pred-4899 started at 2022-06-03 23:55:31 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.484: INFO: Container agnhost ready: true, restart count 0
Jun 3 23:55:43.484: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 3 23:55:43.494: INFO: cmk-init-discover-node2-xvf8p from kube-system started at 2022-06-03 20:12:02 +0000 UTC (3 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container discover ready: false, restart count 0
Jun 3 23:55:43.494: INFO: Container init ready: false, restart count 0
Jun 3 23:55:43.494: INFO: Container install ready: false, restart count 0
Jun 3 23:55:43.494: INFO: cmk-v446x from kube-system started at 2022-06-03 20:12:24 +0000 UTC (2 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container nodereport ready: true, restart count 0
Jun 3 23:55:43.494: INFO: Container reconcile ready: true, restart count 0
Jun 3 23:55:43.494: INFO: kube-flannel-pc7wj from kube-system started at 2022-06-03 20:00:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container kube-flannel ready: true, restart count 1
Jun 3 23:55:43.494: INFO: kube-multus-ds-amd64-n7spl from kube-system started at 2022-06-03 20:00:40 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container kube-multus ready: true, restart count 1
Jun 3 23:55:43.494: INFO: kube-proxy-qmkcq from kube-system started at 2022-06-03 19:59:36 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container kube-proxy ready: true, restart count 1
Jun 3 23:55:43.494: INFO: kubernetes-dashboard-785dcbb76d-25c95 from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 3 23:55:43.494: INFO: kubernetes-metrics-scraper-5558854cb-fz4kn from kube-system started at 2022-06-03 20:01:12 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 3 23:55:43.494: INFO: nginx-proxy-node2 from kube-system started at 2022-06-03 19:59:32 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container nginx-proxy ready: true, restart count 2
Jun 3 23:55:43.494: INFO: node-feature-discovery-worker-gn855 from kube-system started at 2022-06-03 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.494: INFO: Container nfd-worker ready: true, restart count 0
Jun 3 23:55:43.495: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-49xzt from kube-system started at 2022-06-03 20:09:20 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.495: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 3 23:55:43.495: INFO: collectd-q2l4t from monitoring started at 2022-06-03 20:17:32 +0000 UTC (3 container statuses recorded)
Jun 3 23:55:43.495: INFO: Container collectd ready: true, restart count 0
Jun 3 23:55:43.495: INFO: Container collectd-exporter ready: true, restart count 0
Jun 3 23:55:43.495: INFO: Container rbac-proxy ready: true, restart count 0
Jun 3 23:55:43.495: INFO: node-exporter-g45bm from monitoring started at 2022-06-03 20:13:28 +0000 UTC (2 container statuses recorded)
Jun 3 23:55:43.495: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 3 23:55:43.495: INFO: Container node-exporter ready: true, restart count 0
Jun 3 23:55:43.495: INFO: tas-telemetry-aware-scheduling-84ff454dfb-j2kg5 from monitoring started at 2022-06-03 20:16:39 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.495: INFO: Container tas-extender ready: true, restart count 0
Jun 3 23:55:43.495: INFO: with-tolerations from sched-pred-3854 started at 2022-06-03 23:55:39 +0000 UTC (1 container statuses recorded)
Jun 3 23:55:43.495: INFO: Container with-tolerations ready: true, restart count 0
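The per-node pod dumps above are produced by the suite's BeforeEach; roughly the same view can be reproduced by hand with a field selector, which is a standard kubectl feature (node name taken from this log):

    # List every pod the apiserver has bound to node1, across all namespaces.
    kubectl get pods --all-namespaces --field-selector spec.nodeName=node1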
[It] validates that required NodeAffinity setting is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 3 23:55:55.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7181" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:12.136 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that required NodeAffinity setting is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":13,"skipped":5579,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jun 3 23:55:55.580: INFO: Running AfterSuite actions on all nodes
Jun 3 23:55:55.580: INFO: Running AfterSuite actions on node 1
Jun 3 23:55:55.580: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0}

Ran 13 of 5773 Specs in 519.051 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped
PASS

Ginkgo ran 1 suite in 8m40.458442096s
Test Suite Passed
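For reference, the final spec's label-then-relaunch sequence (the STEP lines above) corresponds roughly to the following sketch. The label key kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a, its value 42, and the node name node1 are taken from this log; the pod name with-labels and the agnhost image tag are assumptions, since the log does not record the manifest:

    # Apply the (normally random) label to the node the unconstrained pod landed on.
    # Note: kubernetes.io/-prefixed keys are reserved by convention for system use;
    # the e2e suite sets one deliberately and removes it afterwards.
    kubectl label nodes node1 kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a=42

    # Relaunch the pod with a hard (required) node-affinity constraint on that label.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: with-labels
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a
                operator: In
                values: ["42"]
      containers:
      - name: with-labels
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    EOF

    # Remove the label again (the trailing "-" deletes it), mirroring the cleanup STEP.
    kubectl label nodes node1 kubernetes.io/e2e-12dd7c93-f9e7-4ee6-ae30-45bb182ad15a-

Because the affinity is requiredDuringSchedulingIgnoredDuringExecution, only the freshly labeled node satisfies the nodeSelectorTerms, so the spec passes exactly when the pod is scheduled back onto that node.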