I0422 23:49:23.353320 22 e2e.go:129] Starting e2e run "3c922cc4-8561-437f-b212-6176603f8d90" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650671362 - Will randomize all specs
Will run 13 of 5773 specs

Apr 22 23:49:23.368: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:49:23.373: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 23:49:23.403: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:49:23.468: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 23:49:23.468: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 23:49:23.468: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:49:23.468: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 23:49:23.468: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 23:49:23.486: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 22 23:49:23.486: INFO: e2e test version: v1.21.9
Apr 22 23:49:23.487: INFO: kube-apiserver version: v1.21.1
Apr 22 23:49:23.487: INFO: >>> kubeConfig: /root/.kube/config
Apr 22 23:49:23.493: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:49:23.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
W0422 23:49:23.526484 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 23:49:23.526: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 22 23:49:23.530: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Apr 22 23:49:23.532: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 23:50:23.590: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:50:23.594: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:50:23.615: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 23:50:23.615: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 23:50:23.615: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:50:23.615: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 23:50:23.633: INFO: ComputeCPUMemFraction for node: node1
Apr 22 23:50:23.633: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400
Apr 22 23:50:23.633: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600
Apr 22 23:50:23.633: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000
Apr 22 23:50:23.633: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840
Apr 22 23:50:23.633: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 22 23:50:23.633: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600
Apr 22 23:50:23.633: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 22 23:50:23.633: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Apr 22 23:50:23.633: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Apr 22 23:50:23.633: INFO: ComputeCPUMemFraction for node: node2
Apr 22 23:50:23.633: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600
Apr 22 23:50:23.633: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400
Apr 22 23:50:23.633: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000
Apr 22 23:50:23.633: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840
Apr 22 23:50:23.633: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000
Apr 22 23:50:23.633: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 22 23:50:23.633: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.633: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200
Apr 22 23:50:23.634: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600
Apr 22 23:50:23.634: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200
Apr 22 23:50:23.634: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Apr 22 23:50:23.634: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[It] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
STEP: Trying to launch a pod with a label to get a node which can launch it.
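The cpuFraction and memFraction values logged by ComputeCPUMemFraction are plain ratios of the summed pod requests to the node's allocatable capacity. A minimal sketch reproducing node1's figures, using only the totals printed in the log above:

```python
# Reproduce the logged cpuFraction/memFraction for node1:
# fraction = total requested resource / node allocatable resource.

def fraction(total_requested, allocatable):
    """Ratio of requested to allocatable, as logged by ComputeCPUMemFraction."""
    return total_requested / allocatable

# Values taken verbatim from the node1 log lines above.
cpu_fraction = fraction(887, 77000)                # millicores
mem_fraction = fraction(1710807040, 178884608000)  # bytes

print(cpu_fraction)  # ~0.0115194805..., matching the logged cpuFraction
print(mem_fraction)  # ~0.0095637464..., matching the logged memFraction
```

The same arithmetic checks out for node2 (537 / 77000 ≈ 0.006974, 568944640 / 178884603904 ≈ 0.003181).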
STEP: Verifying the node has a label kubernetes.io/hostname
Apr 22 23:50:27.676: INFO: ComputeCPUMemFraction for node: node1
Apr 22 23:50:27.676: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400
Apr 22 23:50:27.676: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600
Apr 22 23:50:27.676: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000
Apr 22 23:50:27.676: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840
Apr 22 23:50:27.676: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.676: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.676: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 22 23:50:27.676: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.676: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.676: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600
Apr 22 23:50:27.676: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200
Apr 22 23:50:27.676: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 22 23:50:27.676: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.677: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Apr 22 23:50:27.677: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Apr 22 23:50:27.677: INFO: ComputeCPUMemFraction for node: node2
Apr 22 23:50:27.677: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600
Apr 22 23:50:27.677: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400
Apr 22 23:50:27.677: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.677: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000
Apr 22 23:50:27.677: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840
Apr 22 23:50:27.677: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.677: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000
Apr 22 23:50:27.677: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 22 23:50:27.677: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.677: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.677: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600
Apr 22 23:50:27.677: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200
Apr 22 23:50:27.677: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Apr 22 23:50:27.677: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Apr 22 23:50:27.677: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
Apr 22 23:50:27.687: INFO: Waiting for running...
Apr 22 23:50:27.692: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Apr 22 23:50:32.763: INFO: ComputeCPUMemFraction for node: node1
Apr 22 23:50:32.763: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400
Apr 22 23:50:32.763: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600
Apr 22 23:50:32.763: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000
Apr 22 23:50:32.763: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840
Apr 22 23:50:32.763: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 22 23:50:32.763: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600
Apr 22 23:50:32.763: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 22 23:50:32.763: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: 6e7d7d1c-def0-4d1f-9b30-854499ebeb53-0, Cpu: 45313, Mem: 105632540672
Apr 22 23:50:32.763: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
Apr 22 23:50:32.763: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Apr 22 23:50:32.763: INFO: ComputeCPUMemFraction for node: node2
Apr 22 23:50:32.763: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600
Apr 22 23:50:32.763: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400
Apr 22 23:50:32.763: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000
Apr 22 23:50:32.763: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840
Apr 22 23:50:32.763: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000
Apr 22 23:50:32.763: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 22 23:50:32.763: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600
Apr 22 23:50:32.763: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200
Apr 22 23:50:32.763: INFO: Pod for on the node: 4c3a5c58-c795-48f9-8ae1-29259a778a5f-0, Cpu: 45663, Mem: 106774400614
Apr 22 23:50:32.763: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200
Apr 22 23:50:32.763: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6
Apr 22 23:50:32.763: INFO: Node: node2, totalRequestedMemResource: 107343345254, memAllocatableVal: 178884603904, memFraction: 0.6000703409422913
STEP: Trying to launch the pod with podAntiAffinity.
STEP: Wait the pod becomes running
STEP: Verify the pod was scheduled to the expected node.
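The "balanced pods" (the pause pods with the UUID names and ~45000m CPU requests) are sized so that every node lands on the same utilization fraction (0.6 here) before the pod under test is launched. A sketch of the sizing arithmetic, assuming the request is simply the target fraction of allocatable minus what the node already has requested (the exact helper in the e2e framework is not shown in this log):

```python
# Size a balancing pod so a node reaches a target CPU fraction.
# Values for node1 are taken from the log: allocatable 77000m, 887m requested.

def balancing_cpu_request(target_fraction, allocatable_milli, requested_milli):
    """CPU request (millicores) needed to bring the node to target_fraction."""
    return int(target_fraction * allocatable_milli) - requested_milli

cpu = balancing_cpu_request(0.6, 77000, 887)
print(cpu)  # 45313, matching the balancing pod's Cpu on node1 in the log
print((887 + cpu) / 77000)  # 0.6, the logged cpuFraction after balancing
```

Node2's 45663m request follows the same pattern (46200 − 537), which is why both nodes log cpuFraction 0.6 after the balanced pods are created.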
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:50:48.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-3557" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:85.324 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":1,"skipped":162,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:50:48.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:50:48.841: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:50:48.850: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:50:48.861: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:50:48.877: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:50:48.877: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:50:48.877: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container discover ready: false, restart count 0
Apr 22 23:50:48.877: INFO: Container init ready: false, restart count 0
Apr 22 23:50:48.877: INFO: Container install ready: false, restart count 0
Apr 22 23:50:48.877: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:50:48.877: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:50:48.877: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:50:48.877: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:50:48.877: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:50:48.877: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.877: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:50:48.878: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.878: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:50:48.878: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:50:48.878: INFO: Container collectd ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:50:48.878: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:50:48.878: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:50:48.878: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:50:48.878: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Container grafana ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:50:48.878: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.878: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:50:48.878: INFO: pod-with-pod-antiaffinity from sched-priority-3557 started at 2022-04-22 23:50:32 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.878: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0
Apr 22 23:50:48.878: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:50:48.889: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container discover ready: false, restart count 0
Apr 22 23:50:48.889: INFO: Container init ready: false, restart count 0
Apr 22 23:50:48.889: INFO: Container install ready: false, restart count 0
Apr 22 23:50:48.889: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:50:48.889: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:50:48.889: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:50:48.889: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:50:48.889: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:50:48.889: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:50:48.889: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:50:48.889: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:50:48.889: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:50:48.889: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:50:48.889: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container collectd ready: true, restart count 0
Apr 22 23:50:48.889: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:50:48.889: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:50:48.889: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:50:48.889: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:50:48.889: INFO: pod-with-label-security-s1 from sched-priority-3557 started at 2022-04-22 23:50:23 +0000 UTC (1 container statuses recorded)
Apr 22 23:50:48.889: INFO: Container pod-with-label-security-s1 ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b00a6f62-ee74-4345-b4f7-01da5de340b5=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-5c13d0d7-02e3-43c2-9b7d-eb534e741dbe testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cefe87c2dae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8059/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf03d85b191], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf04fa4c386], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 304.018824ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf0562653a6], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf05ce91237], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf0d833af01], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16e85cf0d9f70edc], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-b00a6f62-ee74-4345-b4f7-01da5de340b5: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
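The FailedScheduling message above is the scheduler's per-node filter results summed: one node excluded by the node-affinity selector, one by the test's NoSchedule taint, three masters by their NoSchedule taints. A minimal model of the rule this test exercises (a simplified sketch: real toleration matching also handles operators, effects, and wildcards), with hypothetical node names standing in for the three masters:

```python
# Simplified NoSchedule filtering: a node is feasible only if the pod
# tolerates every NoSchedule taint on it. The "still-no-tolerations" pod
# has no tolerations, so any tainted node is rejected.

def feasible(node_taints, pod_tolerations):
    return all(taint in pod_tolerations for taint in node_taints)

master_taint = ("node-role.kubernetes.io/master", "", "NoSchedule")
test_taint = ("kubernetes.io/e2e-taint-key-b00a6f62-ee74-4345-b4f7-01da5de340b5",
              "testing-taint-value", "NoSchedule")

# Master node names are hypothetical; node1/node2 appear in the log.
nodes = {
    "master-a": [master_taint],
    "master-b": [master_taint],
    "master-c": [master_taint],
    "node1": [],            # rejected separately by the node-affinity selector
    "node2": [test_taint],  # carries the taint applied by the test
}
rejected_by_taints = [n for n, taints in nodes.items()
                      if not feasible(taints, pod_tolerations=[])]
print(len(rejected_by_taints))  # 4 (1 test taint + 3 masters); node1 fails affinity
```

That accounts for all five nodes in "0/5 nodes are available", which is exactly what the test expects until the taint is removed.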
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16e85cf0d9f70edc], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-b00a6f62-ee74-4345-b4f7-01da5de340b5: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cefe87c2dae], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8059/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf03d85b191], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf04fa4c386], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 304.018824ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf0562653a6], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf05ce91237], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16e85cf0d833af01], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b00a6f62-ee74-4345-b4f7-01da5de340b5=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16e85cf11a1083b4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8059/still-no-tolerations to node2]
STEP: removing the label kubernetes.io/e2e-label-key-5c13d0d7-02e3-43c2-9b7d-eb534e741dbe off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-5c13d0d7-02e3-43c2-9b7d-eb534e741dbe
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b00a6f62-ee74-4345-b4f7-01da5de340b5=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:50:55.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8059" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:6.191 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":2,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:50:55.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Apr 22 23:50:55.032: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 23:51:55.089: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:51:55.091: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:51:55.117: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 23:51:55.117: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 23:51:55.117: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:51:55.117: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 23:51:55.133: INFO: ComputeCPUMemFraction for node: node1
Apr 22 23:51:55.133: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400
Apr 22 23:51:55.133: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600
Apr 22 23:51:55.133: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000
Apr 22 23:51:55.133: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840
Apr 22 23:51:55.133: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.133: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.133: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 22 23:51:55.133: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.133: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.133: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600
Apr 22 23:51:55.133: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200
Apr 22 23:51:55.134: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 22 23:51:55.134: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.134: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Apr 22 23:51:55.134: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Apr 22 23:51:55.134: INFO: ComputeCPUMemFraction for node: node2
Apr 22 23:51:55.134: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600
Apr 22 23:51:55.134: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400
Apr 22 23:51:55.134: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.134: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000
Apr 22 23:51:55.134: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840
Apr 22 23:51:55.134: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.134: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000
Apr 22 23:51:55.134: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 22 23:51:55.134: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.134: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200
Apr 22 23:51:55.134: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600
Apr 22 23:51:55.134: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200
Apr 22 23:51:55.134: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Apr 22 23:51:55.134: INFO: Node:
node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Apr 22 23:52:03.224: INFO: ComputeCPUMemFraction for node: node2 Apr 22 23:52:03.224: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600 Apr 22 23:52:03.224: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400 Apr 22 23:52:03.224: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000 Apr 22 23:52:03.224: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840 Apr 22 23:52:03.224: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000 Apr 22 23:52:03.224: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 22 23:52:03.224: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 
209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600 Apr 22 23:52:03.224: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200 Apr 22 23:52:03.224: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Apr 22 23:52:03.224: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 Apr 22 23:52:03.224: INFO: ComputeCPUMemFraction for node: node1 Apr 22 23:52:03.224: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400 Apr 22 23:52:03.224: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600 Apr 22 23:52:03.224: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000 Apr 22 23:52:03.224: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840 Apr 22 23:52:03.224: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 22 23:52:03.224: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600 Apr 22 23:52:03.224: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200 Apr 22 23:52:03.224: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 22 23:52:03.224: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200 Apr 22 23:52:03.224: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Apr 22 23:52:03.224: 
INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Apr 22 23:52:03.235: INFO: Waiting for running... Apr 22 23:52:03.238: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Apr 22 23:52:08.309: INFO: ComputeCPUMemFraction for node: node2 Apr 22 23:52:08.309: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600 Apr 22 23:52:08.309: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400 Apr 22 23:52:08.309: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.309: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000 Apr 22 23:52:08.309: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840 Apr 22 23:52:08.309: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.309: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000 Apr 22 23:52:08.309: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 22 23:52:08.309: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600 Apr 22 23:52:08.310: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: edfe55b1-7fe3-43e2-8e4d-5550019609ae-0, Cpu: 37963, Mem: 88885940224 Apr 22 23:52:08.310: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 22 23:52:08.310: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Apr 22 23:52:08.310: INFO: ComputeCPUMemFraction for node: node1 Apr 22 23:52:08.310: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400 Apr 22 23:52:08.310: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600 Apr 22 23:52:08.310: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000 Apr 22 23:52:08.310: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840 Apr 22 23:52:08.310: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 22 23:52:08.310: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600 Apr 22 23:52:08.310: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 22 23:52:08.310: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200 Apr 22 23:52:08.310: INFO: Pod for on the node: bdac5133-bba1-457c-89b4-86d9b8d3b023-0, Cpu: 37613, Mem: 87744079872 Apr 22 23:52:08.310: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 22 23:52:08.310: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:52:28.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4256" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:93.395 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":3,"skipped":207,"failed":0} 
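The ComputeCPUMemFraction lines in the test above divide the summed pod requests on a node by that node's allocatable capacity, then create a balancing pod sized so both nodes land at exactly 0.5. A minimal Go sketch of that arithmetic (illustrative only; the helper name `fraction` is mine, not the e2e framework's):

```go
package main

import "fmt"

// fraction mirrors the arithmetic behind log lines such as
// "totalRequestedCPUResource: 887, cpuAllocatableMil: 77000,
// cpuFraction: 0.01151948051948052": total requested resource on the
// node divided by the node's allocatable amount.
func fraction(totalRequested, allocatable int64) float64 {
	return float64(totalRequested) / float64(allocatable)
}

func main() {
	// Values copied from the node1 log entries above.
	fmt.Printf("cpuFraction before balancing: %v\n", fraction(887, 77000))
	// After the balancing pod (Cpu: 37613) is added, requests total 38500m
	// of 77000m allocatable, so both nodes report cpuFraction 0.5.
	fmt.Printf("cpuFraction after balancing: %v\n", fraction(38500, 77000))
}
```

With utilization equalized, the subsequent ReplicaSet placement isolates the PodTopologySpread score from the resource-balance score.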
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:52:28.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Apr 22 23:52:28.447: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 23:53:28.507: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node2.
STEP: Apply 10 fake resource to node node1.
[It] validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:54:08.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1196" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:100.409 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":4,"skipped":619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:54:08.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:54:08.847: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:54:08.859: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:54:08.866: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:54:08.873: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:08.873: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:08.874: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:08.874: INFO: Container init ready: false, restart count 0
Apr 22 23:54:08.874: INFO: Container install ready: false, restart count 0
Apr 22 23:54:08.874: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:54:08.874: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:08.874: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:08.874: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:54:08.874: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:54:08.874: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:08.874: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:08.874: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:08.874: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:08.874: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container grafana ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:54:08.874: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:54:08.874: INFO: high from sched-preemption-1196 started at 2022-04-22 23:53:41 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.874: INFO: Container high ready: true, restart count 0
Apr 22 23:54:08.874: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:54:08.892: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:08.892: INFO: Container init ready: false, restart count 0
Apr 22 23:54:08.892: INFO: Container install ready: false, restart count 0
Apr 22 23:54:08.892: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:08.892: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:08.892: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:54:08.892: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:54:08.892: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:08.892: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:08.892: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:54:08.892: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:54:08.892: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:08.892: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:08.892: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:08.892: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:08.892: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:08.892: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:08.892: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:08.892: INFO: low-1 from sched-preemption-1196 started at 2022-04-22 23:53:44 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container low-1 ready: true, restart count 0
Apr 22 23:54:08.892: INFO: medium from sched-preemption-1196 started at 2022-04-22 23:54:00 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:08.892: INFO: Container medium ready: true, restart count 0
[BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214
STEP: Add RuntimeClass and fake resource
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
[It] verify pod overhead is accounted for
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
STEP: Starting Pod to consume most of the node's resource.
STEP: Creating another pod that requires unavailable amount of resources.
STEP: Considering event: Type = [Warning], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d1f6b3f7826], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Warning], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d1fb2368668], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d217b4304a5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5505/filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6 to node1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d21d129d816], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d21ee203a72], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 485.899577ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d21f51ef32b], Reason = [Created], Message = [Created container filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6]
STEP: Considering event: Type = [Normal], Name = [filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6.16e85d21fc61dc98], Reason = [Started], Message = [Started container filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6]
STEP: Considering event: Type = [Normal], Name = [without-label.16e85d1e7adef206], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5505/without-label to node1]
STEP: Considering event: Type = [Normal], Name = [without-label.16e85d1ed44fedb4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-label.16e85d1eeb8f154e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 390.006454ms]
STEP: Considering event: Type = [Normal], Name = [without-label.16e85d1ef2bf9559], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16e85d1efcb0325a], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.16e85d1fc3fa6ea3], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [additional-poddbc23a5e-ea51-4aa0-8533-64c17c4ff597.16e85d223826c17f], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:54:26.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5505" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:17.195 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
    verify pod overhead is accounted for
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":5,"skipped":852,"failed":0}
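The pod-overhead test above passes because the scheduler adds a RuntimeClass's declared overhead to the Pod's container requests before checking node capacity, which is why the additional pod stays Pending with "Insufficient example.com/beardsecond". A hedged sketch of that fit rule (illustrative only; `podFits` and the numbers are mine, not the scheduler's source):

```go
package main

import "fmt"

// podFits sketches the capacity check the "pod overhead" test exercises:
// a Pod is schedulable on a node only if its container requests plus the
// RuntimeClass overhead fit within the node's allocatable amount.
func podFits(containerRequests, overhead, allocatable int64) bool {
	return containerRequests+overhead <= allocatable
}

func main() {
	// Hypothetical quantities of a scalar extended resource.
	fmt.Println(podFits(500, 0, 700))   // true: requests alone fit
	fmt.Println(podFits(500, 250, 700)) // false: overhead tips it over capacity
}
```

Without the overhead term, the second pod would appear to fit, so the FailedScheduling events above are exactly what the test asserts.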
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:54:26.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:54:26.054: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:54:26.062: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:54:26.064: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:54:26.076: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:26.076: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:26.076: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:26.076: INFO: Container init ready: false, restart count 0
Apr 22 23:54:26.076: INFO: Container install ready: false, restart count 0
Apr 22 23:54:26.076: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:54:26.076: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:26.076: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:26.076: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:54:26.076: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:54:26.076: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:26.076: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:26.076: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:26.076: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:26.076: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:26.076: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:26.076: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:26.076: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:54:26.076: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:54:26.076: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:54:26.076: INFO: Container grafana ready: true, restart count 0
Apr 22 23:54:26.077: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:54:26.077: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.077: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:54:26.077: INFO: filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6 from sched-pred-5505 started at 2022-04-22 23:54:21 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.077: INFO: Container filler-pod-d4db24d3-4f6d-46d2-9b8c-bd1d33e1c1a6 ready: true, restart count 0
Apr 22 23:54:26.077: INFO: high from sched-preemption-1196 started at 2022-04-22 23:53:41 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.077: INFO: Container high ready: false, restart count 0
Apr 22 23:54:26.077: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:54:26.096: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:26.096: INFO: Container init ready: false, restart count 0
Apr 22 23:54:26.096: INFO: Container install ready: false, restart count 0
Apr 22 23:54:26.096: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:26.096: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:26.096: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:54:26.096: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:54:26.096: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:26.096: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:26.096: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:54:26.096: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:54:26.096: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:26.096: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:26.096: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:26.096: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:26.096: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:26.096: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:26.096: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:26.096: INFO: low-1 from sched-preemption-1196 started at 2022-04-22 23:53:44 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.096: INFO: Container low-1 ready: false, restart count 0
Apr 22 23:54:26.096: INFO: medium from sched-preemption-1196 started at 2022-04-22 23:54:00 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:26.097: INFO: Container medium ready: false, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-f4da52bd-35d7-463a-a5b6-9b7605a51664 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-f4da52bd-35d7-463a-a5b6-9b7605a51664 off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-f4da52bd-35d7-463a-a5b6-9b7605a51664
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:54:42.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1179" for this suite.
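The three pod1/pod2/pod3 STEPs above exercise the scheduler's host-port conflict check, which keys on the (hostIP, hostPort, protocol) triple rather than on the port alone, with 0.0.0.0 acting as a wildcard IP. A minimal sketch of that rule, simplified from the real kube-scheduler NodePorts filter (function and variable names here are illustrative, not Kubernetes API):

```python
def ports_conflict(a, b):
    """Return True if two hostPort claims collide.

    Each claim is a (host_ip, host_port, protocol) triple. Simplified
    model of the scheduler's rule: a collision requires the same port
    AND the same protocol AND overlapping IPs, where "0.0.0.0"
    overlaps every IP.
    """
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if port_a != port_b or proto_a != proto_b:
        return False
    return ip_a == ip_b or "0.0.0.0" in (ip_a, ip_b)

# The three pods from the test: same port 54321, but pod1/pod2 differ
# in hostIP and pod3 differs in protocol, so none of them conflict and
# all three schedule onto the same node.
pod1 = ("127.0.0.1", 54321, "TCP")
pod2 = ("10.10.190.207", 54321, "TCP")
pod3 = ("10.10.190.207", 54321, "UDP")

assert not ports_conflict(pod1, pod2)  # different hostIP
assert not ports_conflict(pod2, pod3)  # different protocol
assert ports_conflict(pod2, ("0.0.0.0", 54321, "TCP"))  # wildcard IP collides
```

This is why the test passes without any FailedScheduling events: only a fully overlapping triple would make the node report a port conflict.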
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.184 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":6,"skipped":1767,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:54:42.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:54:42.247: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:54:42.257: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:54:42.259: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:54:42.278: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:42.278: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:42.278: INFO: Container init ready: false, restart count 0
Apr 22 23:54:42.278: INFO: Container install ready: false, restart count 0
Apr 22 23:54:42.278: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:54:42.278: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:42.278: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:42.278: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:54:42.278: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:54:42.278: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:42.278: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:42.278: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:42.278: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:42.278: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container grafana ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:54:42.278: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:54:42.278: INFO: pod1 from sched-pred-1179 started at 2022-04-22 23:54:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container agnhost ready: true, restart count 0
Apr 22 23:54:42.278: INFO: pod2 from sched-pred-1179 started at 2022-04-22 23:54:34 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container agnhost ready: true, restart count 0
Apr 22 23:54:42.278: INFO: pod3 from sched-pred-1179 started at 2022-04-22 23:54:38 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.278: INFO: Container agnhost ready: true, restart count 0
Apr 22 23:54:42.278: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:54:42.306: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:42.306: INFO: Container init ready: false, restart count 0
Apr 22 23:54:42.306: INFO: Container install ready: false, restart count 0
Apr 22 23:54:42.306: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:42.306: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:42.306: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:54:42.306: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:54:42.306: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:42.306: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:42.306: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:54:42.306: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.306: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:54:42.307: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.307: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:42.307: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:42.307: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:42.307: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:42.307: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:42.307: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:42.307: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:42.307: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:42.307: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:42.307: INFO: Container node-exporter ready: true, restart count 0
[It] validates that required NodeAffinity setting is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-12458f6d-7d94-4767-9bdd-7ed06622e55c 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-12458f6d-7d94-4767-9bdd-7ed06622e55c off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-12458f6d-7d94-4767-9bdd-7ed06622e55c
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:54:50.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1251" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.166 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that required NodeAffinity setting is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":7,"skipped":2180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
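The NodeAffinity test above applies a random label to a node and then relaunches the pod with a required affinity term matching that label. A simplified sketch of the filter it exercises, modeling only requiredDuringSchedulingIgnoredDuringExecution with the "In" operator (the real check lives in the scheduler's NodeAffinity plugin; this is an illustrative reimplementation):

```python
def node_matches(node_labels, match_expressions):
    """Simplified required-node-affinity check: every expression must
    be satisfied by the node's labels (only the "In" operator is
    modeled here)."""
    for expr in match_expressions:
        if expr["operator"] == "In":
            if node_labels.get(expr["key"]) not in expr["values"]:
                return False
    return True

# Mirrors the test flow: a random label with value "42" is applied to
# the found node, then the pod requires exactly that label.
label_key = "kubernetes.io/e2e-12458f6d-7d94-4767-9bdd-7ed06622e55c"
node_labels = {label_key: "42"}
required = [{"key": label_key, "operator": "In", "values": ["42"]}]

assert node_matches(node_labels, required)  # pod schedules on the labeled node
assert not node_matches({}, required)       # unlabeled nodes are filtered out
```

Because only one node carries the random label, the relaunched pod can land nowhere else, which is what makes the test deterministic.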
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:54:50.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:54:50.415: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:54:50.424: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:54:50.430: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:54:50.440: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:50.440: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:50.440: INFO: Container init ready: false, restart count 0
Apr 22 23:54:50.440: INFO: Container install ready: false, restart count 0
Apr 22 23:54:50.440: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:54:50.440: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:50.440: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:50.440: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:54:50.440: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:54:50.440: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:50.440: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:50.440: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:50.440: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:50.440: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container grafana ready: true, restart count 0
Apr 22 23:54:50.440: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:54:50.440: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:54:50.440: INFO: pod1 from sched-pred-1179 started at 2022-04-22 23:54:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container agnhost ready: false, restart count 0
Apr 22 23:54:50.440: INFO: pod3 from sched-pred-1179 started at 2022-04-22 23:54:38 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.440: INFO: Container agnhost ready: false, restart count 0
Apr 22 23:54:50.440: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:54:50.450: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:50.450: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:50.451: INFO: Container init ready: false, restart count 0
Apr 22 23:54:50.451: INFO: Container install ready: false, restart count 0
Apr 22 23:54:50.451: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:50.451: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:50.451: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:54:50.451: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:54:50.451: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:50.451: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:50.451: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:54:50.451: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:54:50.451: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:50.451: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:50.451: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:50.451: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:50.451: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:50.451: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:50.451: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:50.451: INFO: with-labels from sched-pred-1251 started at 2022-04-22 23:54:46 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:50.451: INFO: Container with-labels ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16e85d2827b87f44], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:54:51.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1779" for this suite.
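The FailedScheduling event above is the expected outcome: the scheduler summarizes per-node filter failures into a single "0/5 nodes are available: ..." message, grouping nodes by failure reason. A sketch of how such a summary can be assembled (illustrative only, not the actual kube-scheduler code):

```python
from collections import Counter

def summarize(reasons):
    """Aggregate per-node failure reasons into a scheduler-style
    '0/N nodes are available: ...' message. `reasons` maps node name
    to the reason that node was filtered out."""
    counts = Counter(reasons.values())
    parts = sorted(f"{n} node(s) {r}" for r, n in counts.items())
    return f"0/{len(reasons)} nodes are available: " + ", ".join(parts) + "."

# The situation from the event: 2 workers fail the affinity/selector
# check, 3 masters carry an untolerated taint.
reasons = {
    "node1": "didn't match Pod's node affinity/selector",
    "node2": "didn't match Pod's node affinity/selector",
    "master1": "had an untolerated taint",
    "master2": "had an untolerated taint",
    "master3": "had an untolerated taint",
}
msg = summarize(reasons)
assert msg.startswith("0/5 nodes are available:")
assert "2 node(s) didn't match Pod's node affinity/selector" in msg
assert "3 node(s) had an untolerated taint" in msg
```

The test only needs to observe this event; since no node passes filtering, the pod stays Pending and the spec passes without ever scheduling it.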
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":8,"skipped":2235,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:54:51.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:54:51.520: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:54:51.529: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:54:51.531: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:54:51.539: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:51.539: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:51.539: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:51.539: INFO: Container init ready: false, restart count 0
Apr 22 23:54:51.539: INFO: Container install ready: false, restart count 0
Apr 22 23:54:51.539: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:54:51.539: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:51.539: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:51.539: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:54:51.539: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:54:51.539: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:51.539: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.539: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:51.540: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:51.540: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:51.540: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:51.540: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:51.540: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:51.540: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:51.540: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:51.540: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:54:51.540: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:54:51.540: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:54:51.540: INFO: Container grafana ready: true, restart count 0
Apr 22 23:54:51.540: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:54:51.540: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.540: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:54:51.540: INFO: pod1 from sched-pred-1179 started at 2022-04-22 23:54:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.540: INFO: Container agnhost ready: false, restart count 0
Apr 22 23:54:51.540: INFO: pod3 from sched-pred-1179 started at 2022-04-22 23:54:38 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.540: INFO: Container agnhost ready: false, restart count 0
Apr 22 23:54:51.540: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:54:51.549: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container discover ready: false, restart count 0
Apr 22 23:54:51.549: INFO: Container init ready: false, restart count 0
Apr 22 23:54:51.549: INFO: Container install ready: false, restart count 0
Apr 22 23:54:51.549: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:54:51.549: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:54:51.549: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:54:51.549: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:54:51.549: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:54:51.549: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:54:51.549: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:54:51.549: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:54:51.549: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:54:51.549: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:54:51.549: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container collectd ready: true, restart count 0
Apr 22 23:54:51.549: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:54:51.549: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:54:51.549: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:54:51.549: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:54:51.549: INFO: with-labels from sched-pred-1251 started at 2022-04-22 23:54:46 +0000 UTC (1 container statuses recorded)
Apr 22 23:54:51.549: INFO: Container with-labels ready: true, restart count 0
[It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
Apr 22 23:54:57.656: INFO: Pod cmk-2vd7z requesting local ephemeral resource =0 on Node node1
Apr 22 23:54:57.656: INFO: Pod cmk-vdkxb requesting local ephemeral resource =0 on Node node2
Apr 22 23:54:57.656: INFO: Pod 
cmk-webhook-6c9d5f8578-nmxns requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod kube-flannel-2kskh requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod kube-flannel-l4rjs requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod kube-multus-ds-amd64-kjrqq requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod kube-multus-ds-amd64-x8jqs requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod kube-proxy-jvkvz requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod kube-proxy-v8fdh requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod kubernetes-dashboard-785dcbb76d-bxmz8 requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod kubernetes-metrics-scraper-5558854cb-kdpvp requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod node-feature-discovery-worker-2hkr5 requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod node-feature-discovery-worker-bktph requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod collectd-g2c8k requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod collectd-ptpbz requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod node-exporter-9zzfv requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: 
INFO: Pod node-exporter-c4bhs requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-8ns7g requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod pod1 requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod pod3 requesting local ephemeral resource =0 on Node node1 Apr 22 23:54:57.656: INFO: Pod with-labels requesting local ephemeral resource =0 on Node node2 Apr 22 23:54:57.656: INFO: Using pod capacity: 40608090249 Apr 22 23:54:57.656: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 Apr 22 23:54:57.656: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Apr 22 23:54:57.846: INFO: Waiting for running... 
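The "Using pod capacity" figure above is the two schedulable nodes' summed ephemeral-storage allocatable divided (integer division) across the 20 saturation pods. A minimal sketch of that arithmetic, using the values from the log (variable names are illustrative, not the e2e framework's):

```python
# Values taken from the log lines above; names are mine.
allocatable_per_node = 406080902496  # ephemeral-storage allocatable on node1 and node2
schedulable_nodes = 2                # the master nodes are tainted and excluded
saturation_pods = 20                 # "Starting additional 20 Pods ..."

# Integer division reproduces the "Using pod capacity" line.
pod_capacity = allocatable_per_node * schedulable_nodes // saturation_pods
print(pod_capacity)  # 40608090249
```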
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16e85d29d40fbce9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-0 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16e85d2bf9c916c1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16e85d2c3a5c1a24], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.083370682s]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16e85d2c52cf54c9], Reason = [Created], Message = [Created container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16e85d2c60f47930], Reason = [Started], Message = [Started container overcommit-0]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16e85d29d4897e04], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-1 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16e85d2b1ffbd1e7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16e85d2b600d6c6a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.074884478s]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16e85d2b719ba13c], Reason = [Created], Message = [Created container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-1.16e85d2bb6e3adbf], Reason = [Started], Message = [Started container overcommit-1]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16e85d29d987c1a0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-10 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16e85d2c586b7d04], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16e85d2c7d03b3ce], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 613.943736ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16e85d2c8376ea1d], Reason = [Created], Message = [Created container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-10.16e85d2c89fdf6da], Reason = [Started], Message = [Started container overcommit-10]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16e85d29da1b946a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-11 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16e85d2af45badc6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16e85d2b08a85244], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 340.557265ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16e85d2b29a377de], Reason = [Created], Message = [Created container overcommit-11]
STEP: Considering event: Type = [Normal], Name = [overcommit-11.16e85d2b65f8533d], Reason = [Started], Message = [Started container overcommit-11]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16e85d29dab03e8e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-12 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16e85d2bda29a4b4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16e85d2bfcd6de03], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 581.771686ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16e85d2c03bf333b], Reason = [Created], Message = [Created container overcommit-12]
STEP: Considering event: Type = [Normal], Name = [overcommit-12.16e85d2c0abf79d8], Reason = [Started], Message = [Started container overcommit-12]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16e85d29db46e18a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-13 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16e85d2c586ef323], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16e85d2c8ebf3bbb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 911.213092ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16e85d2c954fadf6], Reason = [Created], Message = [Created container overcommit-13]
STEP: Considering event: Type = [Normal], Name = [overcommit-13.16e85d2c9c39372b], Reason = [Started], Message = [Started container overcommit-13]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16e85d29dbc64f72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-14 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16e85d2bf5bff4a6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16e85d2c1d9d9eea], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 668.831371ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16e85d2c3162ae26], Reason = [Created], Message = [Created container overcommit-14]
STEP: Considering event: Type = [Normal], Name = [overcommit-14.16e85d2c60eb7348], Reason = [Started], Message = [Started container overcommit-14]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16e85d29dc59629b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-15 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16e85d2bda23ae4f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16e85d2beb112547], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 283.991442ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16e85d2bf1f752ca], Reason = [Created], Message = [Created container overcommit-15]
STEP: Considering event: Type = [Normal], Name = [overcommit-15.16e85d2bf8863609], Reason = [Started], Message = [Started container overcommit-15]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16e85d29dcde8274], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-16 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16e85d2b23350bba], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16e85d2b43818687], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 541.87597ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16e85d2b64c0b3b3], Reason = [Created], Message = [Created container overcommit-16]
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16e85d2bc148933c], Reason = [Started], Message = [Started container overcommit-16]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16e85d29dd5c1711], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-17 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16e85d2bdb2b485e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16e85d2c213518a2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.175042812s]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16e85d2c282ca41a], Reason = [Created], Message = [Created container overcommit-17]
STEP: Considering event: Type = [Normal], Name = [overcommit-17.16e85d2c2f6e3e62], Reason = [Started], Message = [Started container overcommit-17]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16e85d29ddf4b0a4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-18 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16e85d2bdcc1ed1c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16e85d2c38da17be], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.545083223s]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16e85d2c3f9677f5], Reason = [Created], Message = [Created container overcommit-18]
STEP: Considering event: Type = [Normal], Name = [overcommit-18.16e85d2c46a0de76], Reason = [Started], Message = [Started container overcommit-18]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16e85d29deaa8dea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-19 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16e85d2bdafea877], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16e85d2c0f9174da], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 882.030565ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16e85d2c1624d80b], Reason = [Created], Message = [Created container overcommit-19]
STEP: Considering event: Type = [Normal], Name = [overcommit-19.16e85d2c1cff0437], Reason = [Started], Message = [Started container overcommit-19]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16e85d29d5107769], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-2 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16e85d2b1d251a45], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16e85d2b2fb04cc9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 311.104807ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16e85d2b46dc0754], Reason = [Created], Message = [Created container overcommit-2]
STEP: Considering event: Type = [Normal], Name = [overcommit-2.16e85d2ba690fcb9], Reason = [Started], Message = [Started container overcommit-2]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16e85d29d59b0fdf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-3 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16e85d2b26ac43e2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16e85d2b7712fa56], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.348902904s]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16e85d2b8f75ff99], Reason = [Created], Message = [Created container overcommit-3]
STEP: Considering event: Type = [Normal], Name = [overcommit-3.16e85d2bf6fcd390], Reason = [Started], Message = [Started container overcommit-3]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16e85d29d61ff92b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-4 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16e85d2a4fadd9a0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16e85d2a6709017e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 391.840815ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16e85d2a8785658c], Reason = [Created], Message = [Created container overcommit-4]
STEP: Considering event: Type = [Normal], Name = [overcommit-4.16e85d2b0d77c646], Reason = [Started], Message = [Started container overcommit-4]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16e85d29d6aa74e1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-5 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16e85d2bacf0a9e4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16e85d2bc02c5202], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 322.670897ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16e85d2bed65bb11], Reason = [Created], Message = [Created container overcommit-5]
STEP: Considering event: Type = [Normal], Name = [overcommit-5.16e85d2c1d9f94b9], Reason = [Started], Message = [Started container overcommit-5]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16e85d29d73bba14], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-6 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16e85d2c54f3f678], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16e85d2c6aa7c34c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 364.100304ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16e85d2c71d49625], Reason = [Created], Message = [Created container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-6.16e85d2c78bd9ef8], Reason = [Started], Message = [Started container overcommit-6]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16e85d29d7d1fcfd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-7 to node1]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16e85d2af612aea3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16e85d2b1b2a41ba], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 622.290864ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16e85d2b2e1766a2], Reason = [Created], Message = [Created container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-7.16e85d2b7a61103a], Reason = [Started], Message = [Started container overcommit-7]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16e85d29d85dcd9d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-8 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16e85d2c1d9f8abe], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16e85d2c4b33ebb1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 764.693695ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16e85d2c5b74d635], Reason = [Created], Message = [Created container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-8.16e85d2c63b69fc5], Reason = [Started], Message = [Started container overcommit-8]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16e85d29d9083db5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4507/overcommit-9 to node2]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16e85d2bf4a1829d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16e85d2c06f31248], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 307.328322ms]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16e85d2c1d90459d], Reason = [Created], Message = [Created container overcommit-9]
STEP: Considering event: Type = [Normal], Name = [overcommit-9.16e85d2c5b1ab178], Reason = [Started], Message = [Started container overcommit-9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16e85d2d60cdf9c7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:55:13.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4507" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:22.443 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":9,"skipped":2240,"failed":0}
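The FailedScheduling warning is the expected outcome of the saturation step: the Scheduled events show the 20 overcommit pods landing 10 per node, after which the free ephemeral storage left on each node is smaller than one more pod's request. A quick sanity check of that arithmetic (values from the log; variable names are mine):

```python
# Values from the log above; names are illustrative.
allocatable = 406080902496   # per-node ephemeral-storage allocatable
pod_request = 40608090249    # per-pod request ("Using pod capacity")
pods_per_node = 10           # 20 saturation pods spread over 2 nodes

remaining = allocatable - pods_per_node * pod_request
print(remaining)             # 6 bytes left per node
# remaining < pod_request, hence "Insufficient ephemeral-storage" for additional-pod
```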
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:55:13.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Apr 22 23:55:13.970: INFO: Waiting up to 1m0s for all nodes to be ready
Apr 22 23:56:14.025: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:56:14.029: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 23:56:14.050: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting
Apr 22 23:56:14.050: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting
Apr 22 23:56:14.050: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 23:56:14.050: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 22 23:56:14.065: INFO: ComputeCPUMemFraction for node: node1
Apr 22 23:56:14.065: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400
Apr 22 23:56:14.065: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.065: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000
Apr 22 23:56:14.065: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840
Apr 22 23:56:14.065: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 22 23:56:14.065: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.065: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 22 23:56:14.065: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Apr 22 23:56:14.065: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Apr 22 23:56:14.065: INFO: ComputeCPUMemFraction for node: node2
Apr 22 23:56:14.065: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.065: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400
Apr 22 23:56:14.065: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000
Apr 22 23:56:14.065: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840
Apr 22 23:56:14.065: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000
Apr 22 23:56:14.065: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 22 23:56:14.065: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.065: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.065: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200
Apr 22 23:56:14.065: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Apr 22 23:56:14.065: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[It] Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
Apr 22 23:56:14.081: INFO: ComputeCPUMemFraction for node: node1
Apr 22 23:56:14.081: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400
Apr 22 23:56:14.081: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.081: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000
Apr 22 23:56:14.082: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840
Apr 22 23:56:14.082: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Apr 22 23:56:14.082: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.082: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Apr 22 23:56:14.082: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Apr 22 23:56:14.082: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Apr 22 23:56:14.082: INFO: ComputeCPUMemFraction for node: node2
Apr 22 23:56:14.082: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.082: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400
Apr 22 23:56:14.082: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000
Apr 22 23:56:14.082: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840
Apr 22 23:56:14.082: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000
Apr 22 23:56:14.082: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Apr 22 23:56:14.082: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200
Apr 22 23:56:14.082: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600
Apr 22 23:56:14.082: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200
Apr 22 23:56:14.082: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Apr 22 23:56:14.082: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
Apr 22 23:56:14.098: INFO: Waiting for running...
Apr 22 23:56:14.099: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Apr 22 23:56:19.178: INFO: ComputeCPUMemFraction for node: node1 Apr 22 23:56:19.178: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400 Apr 22 23:56:19.178: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600 Apr 22 23:56:19.178: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000 Apr 22 23:56:19.178: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840 Apr 22 23:56:19.178: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 22 23:56:19.178: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600 Apr 22 23:56:19.178: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 22 23:56:19.178: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: 99f5e8e7-a47b-4574-831d-238e8b3cf080-0, Cpu: 37613, Mem: 87744079872 Apr 22 23:56:19.178: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 22 23:56:19.178: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Apr 22 23:56:19.178: INFO: ComputeCPUMemFraction for node: node2 Apr 22 23:56:19.178: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600 Apr 22 23:56:19.178: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400 Apr 22 23:56:19.178: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000 Apr 22 23:56:19.178: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840 Apr 22 23:56:19.178: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000 Apr 22 23:56:19.178: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 22 23:56:19.178: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600 Apr 22 23:56:19.178: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200 Apr 22 23:56:19.178: INFO: Pod for on the node: 2c23f11e-3083-41db-a606-02a7b3765e03-0, Cpu: 37963, Mem: 88885940224 Apr 22 23:56:19.178: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 22 23:56:19.178: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Trying to apply 10 (tolerable) taints on the first node. 
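As a cross-check on the ComputeCPUMemFraction output above, the filler-pod sizing can be reproduced with a short sketch. The function name and the target fraction of 0.5 are assumptions inferred from the log (the real helper lives in test/e2e/scheduling/priorities.go); the numeric values are taken directly from the log lines.

```python
# Sketch (assumed logic) of how the "balanced" filler pods appear to be sized:
# each node gets a pod requesting enough CPU/memory to bring total requests
# to a target fraction of allocatable.

def balanced_pod_cpu(allocatable_milli, requested_milli, target_fraction=0.5):
    """CPU (millicores) a filler pod must request so the node's total
    requested CPU reaches target_fraction of allocatable."""
    return int(allocatable_milli * target_fraction) - requested_milli

# Values from the log:
assert balanced_pod_cpu(77000, 887) == 37613   # node1 filler pod CPU
assert balanced_pod_cpu(77000, 537) == 37963   # node2 filler pod CPU

# After the filler pods land, both nodes report cpuFraction 0.5:
assert (887 + 37613) / 77000 == 0.5
assert (537 + 37963) / 77000 == 0.5

# Memory adds up the same way on node1:
assert 1710807040 + 87744079872 == 89454886912
assert abs(89454886912 / 178884608000 - 0.5000703409429167) < 1e-12
```
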
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-487a6d38-c0b0-44be-a719=testing-taint-value-ac2fe128-3082-43f8-a1f4-a2b7518b9ed8:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-242182f7-18a4-426a-bd59=testing-taint-value-04e12554-be8e-46ef-80fa-44e3b0e28196:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-49e52733-e669-435c-94b2=testing-taint-value-00b6193d-f95e-4895-870c-ed3ed2cd2a9d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1f798b08-164b-4207-a19f=testing-taint-value-769fd55e-de4f-45f2-be75-c7ebfdc413f5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0cf2b10a-870c-4293-9504=testing-taint-value-48223600-b788-4be0-b486-55d0db19dcc0:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bea799ce-6320-4a90-b5bd=testing-taint-value-21a030b3-ba94-4088-be2d-bcd23742b408:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bebd8bc6-9afc-4675-a168=testing-taint-value-165977a9-ae0d-49fb-afd6-8bb954931493:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5dfffe47-8a37-48c6-ae67=testing-taint-value-549e82c9-e4d1-406a-97cb-c677bbeba5ef:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ab6a12e3-85e9-495a-b2f7=testing-taint-value-7bde6e98-46e1-4db7-8df5-3541eb536705:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e5f6fe70-ec2a-4da5-8625=testing-taint-value-daa75fc5-d5ba-4e64-b21b-1913c1afb97b:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-901002cf-ecab-482b-9233=testing-taint-value-f1d1ceda-a3c9-4d2b-98d8-b48eba54b51d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c6a7ecf6-12c2-44fa-9791=testing-taint-value-968991ba-aca9-45c5-881c-2348d13f6b65:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ecad9a51-7925-4564-adf4=testing-taint-value-a0d84ca3-1900-4134-a34c-c4702207da5f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-aa446cbf-51b7-4a87-a619=testing-taint-value-b2fb1e45-5631-4731-82b8-392cf089b0a5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4e143726-d407-437c-b092=testing-taint-value-ed728e6d-106b-46f9-9901-e78261088122:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9d76b790-8edd-439a-b62a=testing-taint-value-f35ddee2-6af8-4892-a6dd-18eacee1ec69:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-16f05c02-56b7-4c2d-b5eb=testing-taint-value-d7ca935b-436e-41f6-b1ef-2e17e1226267:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c922aef5-6ff9-4bfb-b081=testing-taint-value-cabb1ed2-d99f-4f9f-bf78-b724d4f25d8b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d2ebda63-cd62-4b93-b239=testing-taint-value-3d878127-ad75-4963-8344-d336ceed0c94:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0fd01d4e-3190-40d7-b6ed=testing-taint-value-3adddda7-e014-41ae-8bfd-211dd81c35bb:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. 
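The taint strings logged above all follow the pattern `key=value:PreferNoSchedule`, with a randomized key suffix and value per run. A minimal sketch of how such per-run taints and an exact-match toleration check might look (the helper names are hypothetical; only the string format and effect come from the log):

```python
import uuid

def make_taint():
    # Key/value shapes mirror the log: a random suffix under kubernetes.io/
    # and a random testing-taint-value, effect PreferNoSchedule.
    key = f"kubernetes.io/e2e-scheduling-priorities-{str(uuid.uuid4())[:23]}"
    value = f"testing-taint-value-{uuid.uuid4()}"
    return {"key": key, "value": value, "effect": "PreferNoSchedule"}

def tolerates(toleration, taint):
    """Exact-match toleration (operator Equal): key, value, effect must agree."""
    return (toleration["key"] == taint["key"]
            and toleration["value"] == taint["value"]
            and toleration["effect"] == taint["effect"])

taint = make_taint()
assert tolerates(dict(taint), taint)                     # tolerable taint
assert not tolerates(dict(taint, value="other"), taint)  # intolerable taint
```

Because the effect is PreferNoSchedule rather than NoSchedule, the intolerable taints only lower a node's score; the test therefore checks preference ("should prefer scheduled"), not a hard placement guarantee.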
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-901002cf-ecab-482b-9233=testing-taint-value-f1d1ceda-a3c9-4d2b-98d8-b48eba54b51d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c6a7ecf6-12c2-44fa-9791=testing-taint-value-968991ba-aca9-45c5-881c-2348d13f6b65:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ecad9a51-7925-4564-adf4=testing-taint-value-a0d84ca3-1900-4134-a34c-c4702207da5f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-aa446cbf-51b7-4a87-a619=testing-taint-value-b2fb1e45-5631-4731-82b8-392cf089b0a5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4e143726-d407-437c-b092=testing-taint-value-ed728e6d-106b-46f9-9901-e78261088122:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9d76b790-8edd-439a-b62a=testing-taint-value-f35ddee2-6af8-4892-a6dd-18eacee1ec69:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-16f05c02-56b7-4c2d-b5eb=testing-taint-value-d7ca935b-436e-41f6-b1ef-2e17e1226267:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c922aef5-6ff9-4bfb-b081=testing-taint-value-cabb1ed2-d99f-4f9f-bf78-b724d4f25d8b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d2ebda63-cd62-4b93-b239=testing-taint-value-3d878127-ad75-4963-8344-d336ceed0c94:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0fd01d4e-3190-40d7-b6ed=testing-taint-value-3adddda7-e014-41ae-8bfd-211dd81c35bb:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-487a6d38-c0b0-44be-a719=testing-taint-value-ac2fe128-3082-43f8-a1f4-a2b7518b9ed8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-242182f7-18a4-426a-bd59=testing-taint-value-04e12554-be8e-46ef-80fa-44e3b0e28196:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-49e52733-e669-435c-94b2=testing-taint-value-00b6193d-f95e-4895-870c-ed3ed2cd2a9d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1f798b08-164b-4207-a19f=testing-taint-value-769fd55e-de4f-45f2-be75-c7ebfdc413f5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0cf2b10a-870c-4293-9504=testing-taint-value-48223600-b788-4be0-b486-55d0db19dcc0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bea799ce-6320-4a90-b5bd=testing-taint-value-21a030b3-ba94-4088-be2d-bcd23742b408:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bebd8bc6-9afc-4675-a168=testing-taint-value-165977a9-ae0d-49fb-afd6-8bb954931493:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5dfffe47-8a37-48c6-ae67=testing-taint-value-549e82c9-e4d1-406a-97cb-c677bbeba5ef:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ab6a12e3-85e9-495a-b2f7=testing-taint-value-7bde6e98-46e1-4db7-8df5-3541eb536705:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e5f6fe70-ec2a-4da5-8625=testing-taint-value-daa75fc5-d5ba-4e64-b21b-1913c1afb97b:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 
23:56:38.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-5614" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:84.573 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":10,"skipped":2821,"failed":0}
------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 23:56:38.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Apr 22 23:56:38.567: INFO: Waiting up to 1m0s for all nodes to be ready Apr 22 23:57:38.620: INFO: Waiting for terminating namespaces to be deleted... Apr 22 23:57:38.623: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Apr 22 23:57:38.643: INFO: The status of Pod cmk-init-discover-node1-7s78z is Succeeded, skipping waiting Apr 22 23:57:38.643: INFO: The status of Pod cmk-init-discover-node2-2m4dr is Succeeded, skipping waiting Apr 22 23:57:38.643: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Apr 22 23:57:38.643: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Apr 22 23:57:38.658: INFO: ComputeCPUMemFraction for node: node1 Apr 22 23:57:38.658: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400 Apr 22 23:57:38.658: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.658: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000 Apr 22 23:57:38.658: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840 Apr 22 23:57:38.658: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 22 23:57:38.658: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.658: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 22 23:57:38.658: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Apr 22 23:57:38.658: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Apr 22 23:57:38.658: INFO: ComputeCPUMemFraction for node: node2 Apr 22 23:57:38.658: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.658: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400 Apr 22 23:57:38.658: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 
209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000 Apr 22 23:57:38.658: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840 Apr 22 23:57:38.658: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000 Apr 22 23:57:38.658: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 22 23:57:38.658: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.658: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.658: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200 Apr 22 23:57:38.658: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Apr 22 23:57:38.658: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Apr 22 23:57:38.675: INFO: ComputeCPUMemFraction for node: node1 Apr 22 23:57:38.675: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400 Apr 22 23:57:38.675: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.675: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000 Apr 22 23:57:38.675: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840 Apr 22 23:57:38.675: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, 
Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 22 23:57:38.675: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.675: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 22 23:57:38.675: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Apr 22 23:57:38.675: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Apr 22 23:57:38.675: INFO: ComputeCPUMemFraction for node: node2 Apr 22 23:57:38.675: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.675: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400 Apr 22 23:57:38.675: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000 Apr 22 23:57:38.675: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840 Apr 22 23:57:38.675: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000 Apr 22 23:57:38.675: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 22 23:57:38.675: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on 
the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200 Apr 22 23:57:38.675: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600 Apr 22 23:57:38.675: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200 Apr 22 23:57:38.675: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Apr 22 23:57:38.675: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 Apr 22 23:57:38.690: INFO: Waiting for running... Apr 22 23:57:38.692: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Apr 22 23:57:43.763: INFO: ComputeCPUMemFraction for node: node1 Apr 22 23:57:43.763: INFO: Pod for on the node: cmk-2vd7z, Cpu: 200, Mem: 419430400 Apr 22 23:57:43.763: INFO: Pod for on the node: cmk-init-discover-node1-7s78z, Cpu: 300, Mem: 629145600 Apr 22 23:57:43.763: INFO: Pod for on the node: kube-flannel-l4rjs, Cpu: 150, Mem: 64000000 Apr 22 23:57:43.763: INFO: Pod for on the node: kube-multus-ds-amd64-x8jqs, Cpu: 100, Mem: 94371840 Apr 22 23:57:43.763: INFO: Pod for on the node: kube-proxy-v8fdh, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-kdpvp, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Apr 22 23:57:43.763: INFO: Pod for on the node: node-feature-discovery-worker-2hkr5, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: collectd-g2c8k, Cpu: 300, Mem: 629145600 Apr 22 23:57:43.763: INFO: Pod for on the node: node-exporter-9zzfv, Cpu: 112, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Apr 22 23:57:43.763: INFO: Pod for on the 
node: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: ad8ced17-caf1-41e9-9d55-eea3fa457c50-0, Cpu: 37613, Mem: 87744079872 Apr 22 23:57:43.763: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Apr 22 23:57:43.763: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. Apr 22 23:57:43.763: INFO: ComputeCPUMemFraction for node: node2 Apr 22 23:57:43.763: INFO: Pod for on the node: cmk-init-discover-node2-2m4dr, Cpu: 300, Mem: 629145600 Apr 22 23:57:43.763: INFO: Pod for on the node: cmk-vdkxb, Cpu: 200, Mem: 419430400 Apr 22 23:57:43.763: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-nmxns, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: kube-flannel-2kskh, Cpu: 150, Mem: 64000000 Apr 22 23:57:43.763: INFO: Pod for on the node: kube-multus-ds-amd64-kjrqq, Cpu: 100, Mem: 94371840 Apr 22 23:57:43.763: INFO: Pod for on the node: kube-proxy-jvkvz, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-bxmz8, Cpu: 50, Mem: 64000000 Apr 22 23:57:43.763: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Apr 22 23:57:43.763: INFO: Pod for on the node: node-feature-discovery-worker-bktph, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd, Cpu: 100, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: collectd-ptpbz, Cpu: 300, Mem: 629145600 Apr 22 23:57:43.763: INFO: Pod for on the node: node-exporter-c4bhs, Cpu: 112, Mem: 209715200 Apr 22 23:57:43.763: INFO: Pod for on the node: 5181b4b4-ad61-4e30-ad82-0f04c5168ca7-0, Cpu: 37963, Mem: 88885940224 Apr 22 23:57:43.763: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, 
cpuFraction: 0.5 Apr 22 23:57:43.763: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3241 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3241, will wait for the garbage collector to delete the pods Apr 22 23:57:49.953: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 3.340705ms Apr 22 23:57:50.054: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.966565ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 23:58:01.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3241" for this suite. 
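The "avoidPod annotation" this test applies is `scheduler.alpha.kubernetes.io/preferAvoidPods`, whose value is a JSON-encoded AvoidPods structure naming the controller whose pods the node should repel. A hedged sketch of building it (field layout assumed from the core/v1 AvoidPods type; the reason/message strings are placeholders, and only the RC name comes from the log):

```python
import json

def prefer_avoid_pods_annotation(rc_name):
    # Assumed shape of the AvoidPods payload; not copied from the suite.
    avoid = {
        "preferAvoidPods": [{
            "podSignature": {
                "podController": {
                    "kind": "ReplicationController",  # matches the RC in the log
                    "name": rc_name,
                    "apiVersion": "v1",
                    "controller": True,
                }
            },
            "reason": "placeholder reason",
            "message": "placeholder message",
        }]
    }
    return {"scheduler.alpha.kubernetes.io/preferAvoidPods": json.dumps(avoid)}

ann = prefer_avoid_pods_annotation("scheduler-priority-avoid-pod")
assert "preferAvoidPods" in ann["scheduler.alpha.kubernetes.io/preferAvoidPods"]
```

With this annotation on node1 and the RC scaled to len(nodeList)-1 = 1 replica, the test then verifies the replica lands on the other node.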
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:83.443 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":11,"skipped":4209,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] 
SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:58:02.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:58:02.024: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:58:02.032: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:58:02.046: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:58:02.056: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:58:02.056: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container discover ready: false, restart count 0
Apr 22 23:58:02.056: INFO: Container init ready: false, restart count 0
Apr 22 23:58:02.056: INFO: Container install ready: false, restart count 0
Apr 22 23:58:02.056: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:58:02.056: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:58:02.056: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:58:02.056: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:58:02.056: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:58:02.056: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:58:02.056: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:58:02.056: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container collectd ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:58:02.056: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:58:02.056: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container grafana ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:58:02.056: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.056: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:58:02.056: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:58:02.064: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container discover ready: false, restart count 0
Apr 22 23:58:02.064: INFO: Container init ready: false, restart count 0
Apr 22 23:58:02.064: INFO: Container install ready: false, restart count 0
Apr 22 23:58:02.064: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:58:02.064: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:58:02.064: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:58:02.064: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:58:02.064: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:58:02.064: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:58:02.064: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:58:02.064: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:58:02.064: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:58:02.064: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:58:02.064: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container collectd ready: true, restart count 0
Apr 22 23:58:02.064: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:58:02.064: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:58:02.064: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:02.064: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:58:02.064: INFO: Container node-exporter ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:58:16.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6905" for this suite.
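[Editor's note] The spec above verifies that 4 pods with a PodTopologySpread constraint of MaxSkew=1 land evenly on 2 nodes. As a hedged illustration of the skew rule being exercised (a simplified sketch, not code from the e2e suite or the scheduler), the check can be expressed as:

```python
# Illustrative sketch of the PodTopologySpread MaxSkew rule this spec tests.
# Skew = (pods in the most-loaded topology domain) - (pods in the least-loaded).
# With MaxSkew=1, 4 pods across 2 nodes must end up split 2 and 2.

def skew(pod_counts):
    """Difference between the most- and least-loaded topology domain."""
    return max(pod_counts.values()) - min(pod_counts.values())

def placement_allowed(pod_counts, candidate_domain, max_skew=1):
    """Would placing one more pod in candidate_domain keep skew <= max_skew?"""
    counts = dict(pod_counts)
    counts[candidate_domain] = counts.get(candidate_domain, 0) + 1
    return skew(counts) <= max_skew

# 4 pods evenly distributed into 2 nodes: skew 0, within MaxSkew=1.
print(skew({"node1": 2, "node2": 2}))                        # 0
# From a 2/1 state, only the less-loaded node may take the next pod:
print(placement_allowed({"node1": 2, "node2": 1}, "node1"))  # False
print(placement_allowed({"node1": 2, "node2": 1}, "node2"))  # True
```

This mirrors why the test's 4 replicas settle as two `rs-e2e-pts-filter` pods per node in the log below.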
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:14.184 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":12,"skipped":5143,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 23:58:16.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Apr 22 23:58:16.224: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 22 23:58:16.232: INFO: Waiting for terminating namespaces to be deleted...
Apr 22 23:58:16.238: INFO: Logging pods the apiserver thinks is on node node1 before test
Apr 22 23:58:16.257: INFO: cmk-2vd7z from kube-system started at 2022-04-22 20:12:29 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:58:16.258: INFO: cmk-init-discover-node1-7s78z from kube-system started at 2022-04-22 20:11:46 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container discover ready: false, restart count 0
Apr 22 23:58:16.258: INFO: Container init ready: false, restart count 0
Apr 22 23:58:16.258: INFO: Container install ready: false, restart count 0
Apr 22 23:58:16.258: INFO: kube-flannel-l4rjs from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container kube-flannel ready: true, restart count 3
Apr 22 23:58:16.258: INFO: kube-multus-ds-amd64-x8jqs from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:58:16.258: INFO: kube-proxy-v8fdh from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:58:16.258: INFO: kubernetes-metrics-scraper-5558854cb-kdpvp from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Apr 22 23:58:16.258: INFO: nginx-proxy-node1 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container nginx-proxy ready: true, restart count 2
Apr 22 23:58:16.258: INFO: node-feature-discovery-worker-2hkr5 from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:58:16.258: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sfgsh from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:58:16.258: INFO: collectd-g2c8k from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container collectd ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:58:16.258: INFO: node-exporter-9zzfv from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:58:16.258: INFO: prometheus-k8s-0 from monitoring started at 2022-04-22 20:13:52 +0000 UTC (4 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container config-reloader ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container grafana ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Container prometheus ready: true, restart count 1
Apr 22 23:58:16.258: INFO: tas-telemetry-aware-scheduling-84ff454dfb-8ns7g from monitoring started at 2022-04-22 20:16:40 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container tas-extender ready: true, restart count 0
Apr 22 23:58:16.258: INFO: rs-e2e-pts-filter-vx9cb from sched-pred-6905 started at 2022-04-22 23:58:10 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container e2e-pts-filter ready: true, restart count 0
Apr 22 23:58:16.258: INFO: rs-e2e-pts-filter-xjdrr from sched-pred-6905 started at 2022-04-22 23:58:10 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.258: INFO: Container e2e-pts-filter ready: true, restart count 0
Apr 22 23:58:16.258: INFO: Logging pods the apiserver thinks is on node node2 before test
Apr 22 23:58:16.271: INFO: cmk-init-discover-node2-2m4dr from kube-system started at 2022-04-22 20:12:06 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container discover ready: false, restart count 0
Apr 22 23:58:16.271: INFO: Container init ready: false, restart count 0
Apr 22 23:58:16.271: INFO: Container install ready: false, restart count 0
Apr 22 23:58:16.271: INFO: cmk-vdkxb from kube-system started at 2022-04-22 20:12:30 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container nodereport ready: true, restart count 0
Apr 22 23:58:16.271: INFO: Container reconcile ready: true, restart count 0
Apr 22 23:58:16.271: INFO: cmk-webhook-6c9d5f8578-nmxns from kube-system started at 2022-04-22 20:12:30 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container cmk-webhook ready: true, restart count 0
Apr 22 23:58:16.271: INFO: kube-flannel-2kskh from kube-system started at 2022-04-22 19:59:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container kube-flannel ready: true, restart count 2
Apr 22 23:58:16.271: INFO: kube-multus-ds-amd64-kjrqq from kube-system started at 2022-04-22 19:59:42 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container kube-multus ready: true, restart count 1
Apr 22 23:58:16.271: INFO: kube-proxy-jvkvz from kube-system started at 2022-04-22 19:58:37 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container kube-proxy ready: true, restart count 2
Apr 22 23:58:16.271: INFO: kubernetes-dashboard-785dcbb76d-bxmz8 from kube-system started at 2022-04-22 20:00:14 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container kubernetes-dashboard ready: true, restart count 1
Apr 22 23:58:16.271: INFO: nginx-proxy-node2 from kube-system started at 2022-04-22 19:58:33 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container nginx-proxy ready: true, restart count 1
Apr 22 23:58:16.271: INFO: node-feature-discovery-worker-bktph from kube-system started at 2022-04-22 20:08:13 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container nfd-worker ready: true, restart count 0
Apr 22 23:58:16.271: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-vrptd from kube-system started at 2022-04-22 20:09:26 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container kube-sriovdp ready: true, restart count 0
Apr 22 23:58:16.271: INFO: collectd-ptpbz from monitoring started at 2022-04-22 20:17:31 +0000 UTC (3 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container collectd ready: true, restart count 0
Apr 22 23:58:16.271: INFO: Container collectd-exporter ready: true, restart count 0
Apr 22 23:58:16.271: INFO: Container rbac-proxy ready: true, restart count 0
Apr 22 23:58:16.271: INFO: node-exporter-c4bhs from monitoring started at 2022-04-22 20:13:34 +0000 UTC (2 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container kube-rbac-proxy ready: true, restart count 0
Apr 22 23:58:16.271: INFO: Container node-exporter ready: true, restart count 0
Apr 22 23:58:16.271: INFO: rs-e2e-pts-filter-k97hd from sched-pred-6905 started at 2022-04-22 23:58:10 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container e2e-pts-filter ready: true, restart count 0
Apr 22 23:58:16.271: INFO: rs-e2e-pts-filter-xl65b from sched-pred-6905 started at 2022-04-22 23:58:10 +0000 UTC (1 container statuses recorded)
Apr 22 23:58:16.271: INFO: Container e2e-pts-filter ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-4b8f1364-9ba7-46e9-99a6-286703d6f7d1=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-3a24a394-1650-442e-80d8-19ca56d5a14e testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-3a24a394-1650-442e-80d8-19ca56d5a14e off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-3a24a394-1650-442e-80d8-19ca56d5a14e
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-4b8f1364-9ba7-46e9-99a6-286703d6f7d1=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 23:58:26.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6070" for this suite.
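[Editor's note] The spec above taints a node with `kubernetes.io/e2e-taint-key-4b8f1364-9ba7-46e9-99a6-286703d6f7d1=testing-taint-value:NoSchedule`, then relaunches the pod with a matching toleration so it can land on that node. As a hedged, simplified sketch of Kubernetes toleration-matching semantics (illustrative only, not the scheduler's actual code), the match can be expressed as:

```python
# Simplified sketch of taint/toleration matching (operators Equal and Exists).
# A toleration matches a taint when the key matches per its operator and the
# toleration's effect is empty or equal to the taint's effect.

def tolerates(toleration, taint):
    """True if the toleration matches the given taint."""
    if toleration.get("operator", "Equal") == "Exists":
        # Exists: an empty key tolerates any taint key.
        key_ok = toleration.get("key") in (None, taint["key"])
    else:  # Equal: key and value must both match.
        key_ok = (toleration.get("key") == taint["key"]
                  and toleration.get("value") == taint["value"])
    # An empty toleration effect matches any taint effect.
    effect_ok = toleration.get("effect") in (None, taint["effect"])
    return key_ok and effect_ok

taint = {"key": "kubernetes.io/e2e-taint-key-4b8f1364-9ba7-46e9-99a6-286703d6f7d1",
         "value": "testing-taint-value", "effect": "NoSchedule"}
matching = {"key": taint["key"], "operator": "Equal",
            "value": "testing-taint-value", "effect": "NoSchedule"}
print(tolerates(matching, taint))                                  # True
print(tolerates({"key": "other", "operator": "Equal", "value": "x"}, taint))  # False
```

The relaunched pod succeeds for the same reason the first case returns True: its toleration's key, value, and effect all match the randomly applied taint.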
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:10.189 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":13,"skipped":5701,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Apr 22 23:58:26.389: INFO: Running AfterSuite actions on all nodes
Apr 22 23:58:26.389: INFO: Running AfterSuite actions on node 1
Apr 22 23:58:26.389: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0}

Ran 13 of 5773 Specs in 543.026 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped
PASS

Ginkgo ran 1 suite in 9m4.419458435s
Test Suite Passed