I1122 03:07:08.978588 22 e2e.go:129] Starting e2e run "834b6e29-8772-40ad-b873-c3d9d38f8b2d" on Ginkgo node 1 {"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1637550427 - Will randomize all specs Will run 13 of 5770 specs Nov 22 03:07:08.993: INFO: >>> kubeConfig: /root/.kube/config Nov 22 03:07:08.998: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Nov 22 03:07:09.026: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 22 03:07:09.088: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting Nov 22 03:07:09.088: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting Nov 22 03:07:09.088: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 22 03:07:09.088: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 22 03:07:09.088: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Nov 22 03:07:09.105: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Nov 22 03:07:09.105: INFO: e2e test version: v1.21.5 Nov 22 03:07:09.106: INFO: kube-apiserver version: v1.21.1 Nov 22 03:07:09.106: INFO: >>> kubeConfig: /root/.kube/config Nov 22 03:07:09.112: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 
03:07:09.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority W1122 03:07:09.148344 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 22 03:07:09.148: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 22 03:07:09.151: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 22 03:07:09.153: INFO: Waiting up to 1m0s for all nodes to be ready Nov 22 03:08:09.204: INFO: Waiting for terminating namespaces to be deleted... Nov 22 03:08:09.208: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 22 03:08:09.225: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting Nov 22 03:08:09.225: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting Nov 22 03:08:09.225: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 22 03:08:09.225: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 22 03:08:09.242: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:08:09.242: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 22 03:08:09.242: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, 
Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.242: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:08:09.242: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Nov 22 03:08:09.260: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:08:09.260: INFO: Node: node1, 
totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 22 03:08:09.260: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:08:09.260: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:08:09.260: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 22 03:08:09.276: INFO: Waiting for running... Nov 22 03:08:09.277: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
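
Note: the cpuFraction and memFraction values logged above are plain ratios of requested to allocatable resources, capped at 1. A minimal standalone sketch of that arithmetic (an illustration using the values from the log, not the e2e framework's actual helper):

```go
package main

import "fmt"

// fraction mirrors the ratio the test logs: requested / allocatable, capped at 1.
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Values taken from the log above for node1.
	fmt.Println(fraction(100, 77000))              // ~0.0012987  (cpuFraction before balancing)
	fmt.Println(fraction(104857600, 178884628480)) // ~0.00058617 (memFraction before balancing)
	fmt.Println(fraction(499300, 77000))           // 1           (cpuFraction after the balanced pods are created)
}
```
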
Nov 22 03:08:14.345: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 22 03:08:14.345: INFO: Node: node1, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 22 03:08:14.345: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Pod for on the node: 846da4c9-f43d-4b62-b090-e4e6158421b3-0, Cpu: 38400, Mem: 89350039552 Nov 22 03:08:14.345: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 22 03:08:14.345: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884632576, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
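
Note: the ten "tolerable" taints applied below use the PreferNoSchedule effect, so they only lower a node's score for non-tolerating pods rather than blocking scheduling. A hedged sketch of one such taint and the toleration the test pod would carry (key and value shortened for illustration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One of the tolerable taints placed on the first node (key/value shortened).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example",
		Value:  "testing-taint-value-example",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}

	// The toleration the test pod carries so the first node keeps its full score.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("taint: %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("tolerates: %v\n", toleration.ToleratesTaint(&taint))
}
```
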
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8de1af0f-d367-49ed-b5f0=testing-taint-value-750fe5d9-0483-4074-bea5-42c109784c8c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7a2da88b-782f-4e8a-b48c=testing-taint-value-035a3710-14f6-42d3-8234-44f96de352ec:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1e7b740d-a453-4541-b467=testing-taint-value-f7c7412c-e824-4a3c-9c0f-f6c3289249cd:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-428efd4c-cc23-4420-b8f7=testing-taint-value-5f6a4605-27a6-41b3-b811-497e5bd7b9db:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4c5b8187-0f5f-4d37-b267=testing-taint-value-57dbe76d-870c-4bbf-b1ea-d02a246dfb32:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-344198a8-f00e-48e5-8935=testing-taint-value-075e7ce3-4947-4a9d-9728-701a7a5a9b3f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7621e49c-68eb-4dbf-a779=testing-taint-value-bb4f8ba6-8503-412e-b8af-a2293b1306c5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1f1a2882-9ce5-47c7-8495=testing-taint-value-6636b50e-19d3-43c8-908a-9315f4ba094d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-8debfed8-c7f9-4764-b024=testing-taint-value-753c9235-f086-474b-9d4f-e4186d87e967:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3c13dbcf-2fd8-4999-8f0d=testing-taint-value-3cc8a64e-af46-4fe6-8f7c-3a95c591c79b:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1ffa2120-47d7-4659-b18c=testing-taint-value-ceda62d1-0dad-4400-afab-ab4542df5fb1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7abf695b-7dd9-4c3e-b8ba=testing-taint-value-655f03f7-d665-45f4-b1b0-226b70fa2b9a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5323a7a5-7ca0-450a-8b4b=testing-taint-value-6045e9f7-da9d-4b03-8b90-8db37e50c699:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-37177de8-9acb-4452-a3b5=testing-taint-value-b49e03ef-c309-4131-89ce-2693202bfd2f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-d7bf619c-b925-409c-a767=testing-taint-value-5e6c96e1-fcca-43fe-b239-ebb78befb847:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-30a8c65e-5123-4cdd-bc12=testing-taint-value-5a3b3187-7a51-465a-b4ee-1f324bb0e9c1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ff702884-8527-4896-b057=testing-taint-value-ab233a03-2a48-4ba3-981b-a85b752770be:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b6b9a627-ad27-4d46-9bfa=testing-taint-value-4d9c7662-4668-46a0-89ef-8674bfc6951d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-aeaf1d22-a0b6-4c58-ac1f=testing-taint-value-18e8524c-7663-4fe0-b797-a4355558cc7a:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-29f1944e-2216-4e7c-bcbe=testing-taint-value-7416d5c3-9ba8-4d73-a991-a42dbf3f1cad:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1ffa2120-47d7-4659-b18c=testing-taint-value-ceda62d1-0dad-4400-afab-ab4542df5fb1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7abf695b-7dd9-4c3e-b8ba=testing-taint-value-655f03f7-d665-45f4-b1b0-226b70fa2b9a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5323a7a5-7ca0-450a-8b4b=testing-taint-value-6045e9f7-da9d-4b03-8b90-8db37e50c699:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-37177de8-9acb-4452-a3b5=testing-taint-value-b49e03ef-c309-4131-89ce-2693202bfd2f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d7bf619c-b925-409c-a767=testing-taint-value-5e6c96e1-fcca-43fe-b239-ebb78befb847:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-30a8c65e-5123-4cdd-bc12=testing-taint-value-5a3b3187-7a51-465a-b4ee-1f324bb0e9c1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ff702884-8527-4896-b057=testing-taint-value-ab233a03-2a48-4ba3-981b-a85b752770be:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b6b9a627-ad27-4d46-9bfa=testing-taint-value-4d9c7662-4668-46a0-89ef-8674bfc6951d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-aeaf1d22-a0b6-4c58-ac1f=testing-taint-value-18e8524c-7663-4fe0-b797-a4355558cc7a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-29f1944e-2216-4e7c-bcbe=testing-taint-value-7416d5c3-9ba8-4d73-a991-a42dbf3f1cad:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8de1af0f-d367-49ed-b5f0=testing-taint-value-750fe5d9-0483-4074-bea5-42c109784c8c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7a2da88b-782f-4e8a-b48c=testing-taint-value-035a3710-14f6-42d3-8234-44f96de352ec:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1e7b740d-a453-4541-b467=testing-taint-value-f7c7412c-e824-4a3c-9c0f-f6c3289249cd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-428efd4c-cc23-4420-b8f7=testing-taint-value-5f6a4605-27a6-41b3-b811-497e5bd7b9db:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4c5b8187-0f5f-4d37-b267=testing-taint-value-57dbe76d-870c-4bbf-b1ea-d02a246dfb32:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-344198a8-f00e-48e5-8935=testing-taint-value-075e7ce3-4947-4a9d-9728-701a7a5a9b3f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7621e49c-68eb-4dbf-a779=testing-taint-value-bb4f8ba6-8503-412e-b8af-a2293b1306c5:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-1f1a2882-9ce5-47c7-8495=testing-taint-value-6636b50e-19d3-43c8-908a-9315f4ba094d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8debfed8-c7f9-4764-b024=testing-taint-value-753c9235-f086-474b-9d4f-e4186d87e967:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3c13dbcf-2fd8-4999-8f0d=testing-taint-value-3cc8a64e-af46-4fe6-8f7c-3a95c591c79b:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:08:23.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8521" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:74.583 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":1,"skipped":445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:08:23.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 22 03:08:23.733: INFO: Waiting up to 1m0s for all nodes to be ready Nov 22 03:09:23.792: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. 
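
Note: in the [It] block below, the test fills 9/10 of the fake extended resource on each node with one high- and three low-priority pods, then creates a medium-priority pod whose topology spread constraint forces a low-priority pod to be preempted. A rough sketch of a pod requesting such an extended resource under a priority class (the resource name, class name, and quantity here are illustrative, not the test's actual identifiers):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative extended-resource name; the e2e test patches its own fake resource onto the nodes.
	fakeRes := corev1.ResourceName("example.com/fake-resource")

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "high"},
		Spec: corev1.PodSpec{
			// Higher-priority pods like this one survive; lower-priority ones are the
			// preemption candidates when the medium pod cannot otherwise fit.
			PriorityClassName: "high-priority", // illustrative class name
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{fakeRes: resource.MustParse("3")},
					Limits:   corev1.ResourceList{fakeRes: resource.MustParse("3")},
				},
			}},
		},
	}
	fmt.Println(pod.Name, pod.Spec.Containers[0].Resources.Requests)
}
```
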
[It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:10:04.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5128" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:100.390 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":2,"skipped":529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:10:04.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:10:04.120: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:10:04.128: INFO: Waiting for terminating 
namespaces to be deleted... Nov 22 03:10:04.131: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:10:04.141: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:10:04.141: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:04.141: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:10:04.141: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:04.141: INFO: Container init ready: false, restart count 0 Nov 22 03:10:04.141: INFO: Container install ready: false, restart count 0 Nov 22 03:10:04.141: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:10:04.141: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:04.141: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:10:04.141: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:10:04.141: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:04.141: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:04.141: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:04.141: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:04.141: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:04.141: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:04.141: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:04.141: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:10:04.141: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container grafana ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:10:04.141: INFO: high from sched-preemption-5128 started at 
2021-11-22 03:09:35 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.141: INFO: Container high ready: true, restart count 0 Nov 22 03:10:04.141: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:10:04.150: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:10:04.150: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:04.150: INFO: Container init ready: false, restart count 0 Nov 22 03:10:04.150: INFO: Container install ready: false, restart count 0 Nov 22 03:10:04.150: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:10:04.150: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:04.150: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:04.150: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:10:04.150: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:10:04.150: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:04.150: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:10:04.150: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:10:04.150: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:04.150: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:04.150: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:04.150: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:04.150: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:04.150: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:04.150: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:04.150: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:04.150: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:04.150: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:04.150: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container tas-extender 
ready: true, restart count 0 Nov 22 03:10:04.150: INFO: low-1 from sched-preemption-5128 started at 2021-11-22 03:09:41 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container low-1 ready: true, restart count 0 Nov 22 03:10:04.150: INFO: medium from sched-preemption-5128 started at 2021-11-22 03:09:57 +0000 UTC (1 container statuses recorded) Nov 22 03:10:04.150: INFO: Container medium ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:10:16.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7640" for this suite. 
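
Note: the filtering test above checks that four replicas land 2/2 across the two nodes labelled with the dedicated kubernetes.io/e2e-pts-filter topology key. A hedged sketch of the spread constraint such replicas would declare (the label selector is illustrative):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:           1, // at most one more matching pod in any one topology domain than in another
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule, // a hard (filtering) constraint, not just a preference
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // illustrative selector
		},
	}
	fmt.Printf("%+v\n", constraint)
}
```
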
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":3,"skipped":929,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:10:16.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:10:16.306: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:10:16.315: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 03:10:16.317: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:10:16.328: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:10:16.328: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:16.328: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:10:16.328: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:16.328: INFO: Container init ready: false, restart count 0 Nov 22 03:10:16.328: INFO: Container install ready: false, restart count 0 Nov 22 03:10:16.328: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:10:16.328: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:16.328: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:10:16.328: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:10:16.328: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:16.328: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:16.328: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:16.328: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:16.328: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:16.328: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:16.328: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:16.328: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:10:16.328: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container grafana ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:10:16.328: INFO: rs-e2e-pts-filter-85x5m from sched-pred-7640 started at 2021-11-22 03:10:12 
+0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:16.328: INFO: rs-e2e-pts-filter-rfgzm from sched-pred-7640 started at 2021-11-22 03:10:12 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.328: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:16.328: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:10:16.335: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:10:16.335: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:16.335: INFO: Container init ready: false, restart count 0 Nov 22 03:10:16.335: INFO: Container install ready: false, restart count 0 Nov 22 03:10:16.335: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:10:16.335: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:16.335: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:16.335: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:10:16.335: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:10:16.335: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:16.335: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:10:16.335: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:10:16.335: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:16.335: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:16.335: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:16.335: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:16.335: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:16.335: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:16.335: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:16.335: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:16.335: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:16.335: INFO: Container node-exporter ready: true, restart count 
0 Nov 22 03:10:16.335: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container tas-extender ready: true, restart count 0 Nov 22 03:10:16.335: INFO: rs-e2e-pts-filter-mwwdm from sched-pred-7640 started at 2021-11-22 03:10:12 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:16.335: INFO: rs-e2e-pts-filter-rxb7c from sched-pred-7640 started at 2021-11-22 03:10:12 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:16.335: INFO: medium from sched-preemption-5128 started at 2021-11-22 03:09:57 +0000 UTC (1 container statuses recorded) Nov 22 03:10:16.335: INFO: Container medium ready: false, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e33b22ac-7236-4a64-a51d-a2fcccf1a4b7=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-eb941d0b-da56-4c18-b5d0-f5051488da3d testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c05ca5aac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9635/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c5c484830], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c6d797232], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 288.428906ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c733e89f2], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c7a821251], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9cf5671667], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b9bf9cf758adb3], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-e33b22ac-7236-4a64-a51d-a2fcccf1a4b7: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b9bf9cf758adb3], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-e33b22ac-7236-4a64-a51d-a2fcccf1a4b7: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c05ca5aac], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9635/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c5c484830], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c6d797232], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 288.428906ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c733e89f2], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9c7a821251], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b9bf9cf5671667], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e33b22ac-7236-4a64-a51d-a2fcccf1a4b7=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b9bf9d4fa79c7c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9635/still-no-tolerations to node1] STEP: removing the label kubernetes.io/e2e-label-key-eb941d0b-da56-4c18-b5d0-f5051488da3d off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-eb941d0b-da56-4c18-b5d0-f5051488da3d STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e33b22ac-7236-4a64-a51d-a2fcccf1a4b7=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:10:22.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9635" for this suite. 
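
Note: the events above show the expected behaviour: with a NoSchedule taint on the selected node and no matching toleration, the "still-no-tolerations" pod stays unschedulable until the taint is removed. A minimal sketch of that mismatch check (taint key shortened for illustration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The random NoSchedule taint the test places on the chosen node (key shortened).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The pod carries no toleration for that key, so nothing matches
	// and scheduling is refused, as in the FailedScheduling event above.
	var tolerations []corev1.Toleration
	tolerated := false
	for i := range tolerations {
		if tolerations[i].ToleratesTaint(&taint) {
			tolerated = true
		}
	}
	fmt.Println("pod tolerates taint:", tolerated) // false
}
```
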
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.176 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":4,"skipped":1938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:10:22.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:10:22.485: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:10:22.492: INFO: Waiting for terminating namespaces to be deleted... 
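
Note: the test starting here relies on the rule that two pods conflict only when hostPort, hostIP, and protocol all overlap. A hedged sketch of two container port specs that share a hostPort yet are not expected to conflict (addresses and port are illustrative; the check shown is simplified, since the real scheduler logic also treats 0.0.0.0 as overlapping any address):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Same hostPort, but different hostIP and protocol.
	a := corev1.ContainerPort{HostPort: 54321, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}
	b := corev1.ContainerPort{HostPort: 54321, HostIP: "127.0.0.2", Protocol: corev1.ProtocolUDP}

	// Simplified overlap test: all three fields must match for a conflict.
	conflict := a.HostPort == b.HostPort && a.HostIP == b.HostIP && a.Protocol == b.Protocol
	fmt.Println("conflict:", conflict) // false
}
```
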
Nov 22 03:10:22.495: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:10:22.504: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:10:22.504: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:22.504: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:22.504: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:10:22.504: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:22.504: INFO: Container init ready: false, restart count 0 Nov 22 03:10:22.504: INFO: Container install ready: false, restart count 0 Nov 22 03:10:22.504: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:10:22.504: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:22.504: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:10:22.504: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:10:22.504: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:22.504: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:22.504: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.504: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:22.504: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:22.504: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:22.504: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:22.504: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:22.504: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:22.504: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:22.504: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:22.504: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:10:22.504: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:10:22.504: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:10:22.504: INFO: Container grafana ready: true, restart count 0 Nov 22 03:10:22.505: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:10:22.505: INFO: rs-e2e-pts-filter-85x5m from sched-pred-7640 started at 2021-11-22 03:10:12 
+0000 UTC (1 container statuses recorded) Nov 22 03:10:22.505: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:22.505: INFO: rs-e2e-pts-filter-rfgzm from sched-pred-7640 started at 2021-11-22 03:10:12 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.505: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:22.505: INFO: still-no-tolerations from sched-pred-9635 started at 2021-11-22 03:10:21 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.505: INFO: Container still-no-tolerations ready: false, restart count 0 Nov 22 03:10:22.505: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:10:22.520: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:10:22.520: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:22.520: INFO: Container init ready: false, restart count 0 Nov 22 03:10:22.520: INFO: Container install ready: false, restart count 0 Nov 22 03:10:22.520: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:10:22.520: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:22.520: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:22.520: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:10:22.520: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:10:22.520: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:22.520: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:10:22.520: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:10:22.520: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:22.520: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:22.520: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:22.520: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:22.520: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:22.520: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:22.520: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:22.520: INFO: node-exporter-r2vkb from monitoring 
started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:22.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:22.520: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:22.520: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container tas-extender ready: true, restart count 0 Nov 22 03:10:22.520: INFO: rs-e2e-pts-filter-mwwdm from sched-pred-7640 started at 2021-11-22 03:10:12 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:22.520: INFO: rs-e2e-pts-filter-rxb7c from sched-pred-7640 started at 2021-11-22 03:10:12 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 22 03:10:22.520: INFO: medium from sched-preemption-5128 started at 2021-11-22 03:09:57 +0000 UTC (1 container statuses recorded) Nov 22 03:10:22.520: INFO: Container medium ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-fb6898d2-76d1-4836-ad7f-c7ce304917ce 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-fb6898d2-76d1-4836-ad7f-c7ce304917ce off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-fb6898d2-76d1-4836-ad7f-c7ce304917ce [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:10:38.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7941" for this suite. 
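Editor's note: the steps above show why pod1, pod2, and pod3 all fit on the same node even though they share hostPort 54321: the scheduler's node-ports check keys on the (hostIP, hostPort, protocol) triple, and each pod differs in hostIP or protocol. The sketch below is a hypothetical reconstruction of the three pod shapes; the image and the nodeSelector label key are placeholders for the random ones in the log (the label value "90" is taken from the step above).

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod pins a pod to the labelled node and exposes a single hostPort.
func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Selects the node the test labelled earlier (value "90" in the log).
			NodeSelector: map[string]string{"example.com/e2e-hostport-label": "90"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	pods := []*corev1.Pod{
		hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP),     // loopback, TCP
		hostPortPod("pod2", "10.10.190.208", corev1.ProtocolTCP), // node IP, TCP
		hostPortPod("pod3", "10.10.190.208", corev1.ProtocolUDP), // node IP, UDP
	}
	for _, p := range pods {
		out, _ := json.Marshal(p.Spec.Containers[0].Ports[0])
		fmt.Println(p.Name, string(out)) // each (hostIP, port, protocol) triple is distinct
	}
}
```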
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.178 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":5,"skipped":2004,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:10:38.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:10:38.670: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:10:38.678: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 03:10:38.680: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:10:38.694: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:10:38.694: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:38.694: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:38.694: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:10:38.694: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:38.694: INFO: Container init ready: false, restart count 0 Nov 22 03:10:38.694: INFO: Container install ready: false, restart count 0 Nov 22 03:10:38.694: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.694: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:10:38.695: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.695: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:38.695: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.695: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:10:38.695: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.695: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:10:38.695: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.695: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:38.695: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.695: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:38.695: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.695: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:38.695: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:38.695: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:38.695: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:38.695: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:38.695: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:38.695: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:38.695: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:38.695: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:10:38.695: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:10:38.695: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:10:38.695: INFO: Container grafana ready: true, restart count 0 Nov 22 03:10:38.695: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:10:38.695: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 
03:10:38.705: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:10:38.705: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:38.705: INFO: Container init ready: false, restart count 0 Nov 22 03:10:38.705: INFO: Container install ready: false, restart count 0 Nov 22 03:10:38.705: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:10:38.705: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:38.705: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:38.705: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:10:38.705: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:10:38.705: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:38.705: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:10:38.705: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:10:38.705: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:38.705: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:38.705: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:38.705: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:38.705: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:38.705: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:38.705: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:38.705: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:38.705: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:38.705: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:38.705: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container tas-extender ready: true, restart count 0 Nov 22 03:10:38.705: INFO: pod1 from sched-pred-7941 started at 2021-11-22 03:10:26 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container agnhost ready: true, restart count 0 
Nov 22 03:10:38.705: INFO: pod2 from sched-pred-7941 started at 2021-11-22 03:10:30 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container agnhost ready: true, restart count 0 Nov 22 03:10:38.705: INFO: pod3 from sched-pred-7941 started at 2021-11-22 03:10:34 +0000 UTC (1 container statuses recorded) Nov 22 03:10:38.705: INFO: Container agnhost ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa22cab958d], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa26f5c4a93], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa45932cd61], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5316/filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa4ac7713b7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa4bed9524d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 308.421734ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa4c4a58ea7], Reason = [Created], Message = [Created container filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129] STEP: Considering event: Type = [Normal], Name = [filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129.16b9bfa4cb2021ae], Reason = [Started], Message = [Started container filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129] STEP: Considering event: Type = [Normal], Name = [without-label.16b9bfa13bea27b5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5316/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16b9bfa192b92441], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16b9bfa1a47015b3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 297.196059ms] STEP: Considering event: Type = [Normal], Name = [without-label.16b9bfa1aa6b383c], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b9bfa1b2610ae2], Reason = 
[Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b9bfa22beb4d90], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod42cb2092-533c-496b-8906-0fa51058ea10.16b9bfa4f9437aaf], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:10:55.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5316" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:17.182 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":6,"skipped":2409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:10:55.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:10:55.860: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:10:55.868: INFO: Waiting for terminating namespaces to be deleted... Nov 22 03:10:55.871: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:10:55.880: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:10:55.880: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:55.880: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:10:55.880: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:55.880: INFO: Container init ready: false, restart count 0 Nov 22 03:10:55.880: INFO: Container install ready: false, restart count 0 Nov 22 03:10:55.880: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:10:55.880: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:55.880: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:10:55.880: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:10:55.880: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:55.880: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:55.880: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.880: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:55.880: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:55.880: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:55.880: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:55.880: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:55.880: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) 
Nov 22 03:10:55.880: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container grafana ready: true, restart count 0 Nov 22 03:10:55.880: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:10:55.880: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:10:55.890: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:10:55.890: INFO: Container discover ready: false, restart count 0 Nov 22 03:10:55.890: INFO: Container init ready: false, restart count 0 Nov 22 03:10:55.890: INFO: Container install ready: false, restart count 0 Nov 22 03:10:55.890: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:10:55.890: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:10:55.890: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:10:55.890: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:10:55.890: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:10:55.890: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:10:55.890: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:10:55.890: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:10:55.890: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:10:55.890: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:10:55.890: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:10:55.890: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:10:55.890: INFO: Container collectd ready: true, restart count 0 Nov 22 03:10:55.890: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:10:55.890: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:10:55.890: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:10:55.890: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:10:55.890: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:10:55.890: 
INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container tas-extender ready: true, restart count 0 Nov 22 03:10:55.890: INFO: filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129 from sched-pred-5316 started at 2021-11-22 03:10:52 +0000 UTC (1 container statuses recorded) Nov 22 03:10:55.890: INFO: Container filler-pod-dabaeb11-540d-4efa-ba4c-7081c4ef9129 ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7006b78d-5a08-45aa-b655-b0f165e52ba5=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-046af5f5-746c-46d0-8529-2cbb2a9504b4 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-046af5f5-746c-46d0-8529-2cbb2a9504b4 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-046af5f5-746c-46d0-8529-2cbb2a9504b4 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7006b78d-5a08-45aa-b655-b0f165e52ba5=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:11:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6263" for this suite. 
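Editor's note: this is the matching counterpart of the earlier taint test: the relaunched pod both tolerates the random NoSchedule taint and selects the random label, so the scheduler places it on the tainted node (it shows up as "with-tolerations" on node1 in the next test's pod listing). The sketch below is a hypothetical pod spec in that shape; the key and value strings stand in for the generated ones in the log.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			// Must match the label the test applied to the tainted node.
			NodeSelector: map[string]string{
				"example.com/e2e-label-key": "testing-label-value",
			},
			// Must match the taint exactly (key, value, effect); without this
			// the pod would be rejected like "still-no-tolerations" earlier.
			Tolerations: []corev1.Toleration{{
				Key:      "example.com/e2e-taint-key",
				Operator: corev1.TolerationOpEqual,
				Value:    "testing-taint-value",
				Effect:   corev1.TaintEffectNoSchedule,
			}},
			Containers: []corev1.Container{
				{Name: "with-tolerations", Image: "k8s.gcr.io/pause:3.4.1"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
```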
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:12.162 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":7,"skipped":3304,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:11:08.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:11:08.025: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:11:08.034: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 03:11:08.037: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:11:08.045: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:11:08.045: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:11:08.045: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:11:08.045: INFO: Container discover ready: false, restart count 0 Nov 22 03:11:08.045: INFO: Container init ready: false, restart count 0 Nov 22 03:11:08.045: INFO: Container install ready: false, restart count 0 Nov 22 03:11:08.045: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:11:08.045: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:11:08.045: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:11:08.045: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:11:08.045: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:11:08.045: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:11:08.045: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.045: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:11:08.045: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:11:08.045: INFO: Container collectd ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:11:08.045: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:11:08.045: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:11:08.045: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:11:08.045: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container grafana ready: true, restart count 0 Nov 22 03:11:08.045: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:11:08.045: INFO: with-tolerations from sched-pred-6263 started at 2021-11-22 03:10:59 +0000 
UTC (1 container statuses recorded) Nov 22 03:11:08.046: INFO: Container with-tolerations ready: true, restart count 0 Nov 22 03:11:08.046: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:11:08.055: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:11:08.056: INFO: Container discover ready: false, restart count 0 Nov 22 03:11:08.056: INFO: Container init ready: false, restart count 0 Nov 22 03:11:08.056: INFO: Container install ready: false, restart count 0 Nov 22 03:11:08.056: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:11:08.056: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:11:08.056: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:11:08.056: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:11:08.056: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:11:08.056: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:11:08.056: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:11:08.056: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:11:08.056: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:11:08.056: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:11:08.056: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:11:08.056: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:11:08.056: INFO: Container collectd ready: true, restart count 0 Nov 22 03:11:08.056: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:11:08.056: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:11:08.056: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:11:08.056: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:11:08.056: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:11:08.056: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:11:08.056: INFO: Container tas-extender ready: true, 
restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Nov 22 03:11:08.091: INFO: Pod cmk-7wvgm requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod cmk-prx26 requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod cmk-webhook-6c9d5f8578-8fxd8 requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod kube-flannel-cfzcv requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod kube-flannel-rdjt7 requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod kube-multus-ds-amd64-6bg2m requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod kube-multus-ds-amd64-wcr4n requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod kube-proxy-5xb56 requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod kube-proxy-mb5cq requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod kubernetes-dashboard-785dcbb76d-wrkrj requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod kubernetes-metrics-scraper-5558854cb-kzhf7 requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod node-feature-discovery-worker-lkpb8 requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod node-feature-discovery-worker-slrp4 requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod collectd-6t47m requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod collectd-zmh78 requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod node-exporter-jj5rx requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod node-exporter-r2vkb requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-q64pf requesting local ephemeral resource =0 on Node node2 Nov 22 03:11:08.091: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node node1 Nov 22 03:11:08.091: INFO: Using pod capacity: 40542413347 Nov 22 03:11:08.091: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 Nov 22 03:11:08.091: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Nov 22 03:11:08.278: INFO: Waiting for running... 
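Editor's note: a back-of-the-envelope check of the saturation math above. Each node reports 405424133473 bytes of allocatable ephemeral storage, every filler pod requests 40542413347 bytes (consistent with allocatable divided by 10; the divisor is inferred from the numbers), and 20 such pods across the two nodes leave only a few bytes free per node, so one more pod of the same size cannot be scheduled. The small program below just reproduces that arithmetic.

```go
package main

import "fmt"

func main() {
	const (
		allocatablePerNode int64 = 405424133473 // "local ephemeral resource allocatable" per node
		nodes              int64 = 2
		podsPerNode        int64 = 10
	)

	perPod := allocatablePerNode / podsPerNode // 40542413347, matches "Using pod capacity"
	usedPerNode := perPod * podsPerNode        // 405424133470
	freePerNode := allocatablePerNode - usedPerNode

	fmt.Println("per-pod ephemeral-storage request:", perPod)
	fmt.Println("requested per node after 10 pods:", usedPerNode, "free:", freePerNode)
	fmt.Println("total filler pods:", nodes*podsPerNode)
	// One additional pod requesting perPod bytes exceeds what is left on either
	// node, so the scheduler reports it unschedulable for lack of
	// ephemeral-storage, which is what the test asserts next.
	fmt.Println("extra pod fits on a node:", freePerNode >= perPod)
}
```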
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b9bfa812747aba], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b9bfa9593cc9f6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b9bfa96c22629b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 317.026155ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b9bfa996f320af], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b9bfa9e0461669], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b9bfa812915e02], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b9bfa99af4a81c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b9bfa9ad224a4e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 304.973735ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b9bfa9bdae1617], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b9bfaa0b7b31bd], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b9bfa8179ab61e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b9bfaa16d6d495], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b9bfaa3ded24f8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 655.76312ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b9bfaa447ea9df], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b9bfaa4acb56a3], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b9bfa8182f3e06], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b9bfa92fdec7c2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b9bfa955f48edc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 638.950481ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b9bfa95cb3d16d], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b9bfa98975b73f], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b9bfa818b323a2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b9bfa9b6be2574], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: 
Considering event: Type = [Normal], Name = [overcommit-12.16b9bfa9c9e55851], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 321.330286ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b9bfa9dd7b58d1], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b9bfaa2be00729], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b9bfa8194b9482], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b9bfa8b68d8def], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b9bfa8d9b17ab9], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 589.547774ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b9bfa8f94b9f00], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b9bfa960e68e62], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b9bfa819e157c3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b9bfaa32f6bb62], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b9bfaa50615acf], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 493.521116ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b9bfaa56cc1a6f], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b9bfaa5d8e603b], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b9bfa81a640286], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-15 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b9bfaa33b74fc1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b9bfaa64cb3af5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 823.383323ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b9bfaa6ad29670], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b9bfaa71693032], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b9bfa81af0b1d8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b9bfaa10948b6c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b9bfaa22408309], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 296.478438ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b9bfaa35ac0dd1], Reason = [Created], Message = [Created container overcommit-16] 
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b9bfaa42bdf6d1], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b9bfa81b740277], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b9bfaa26a35acc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b9bfaa48798f35], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 567.679827ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b9bfaa4f7251c4], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b9bfaa56f07ea4], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b9bfa81bffbf78], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b9bfa9f0aebd95], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b9bfaa108db3fe], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 534.697611ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b9bfaa307a8039], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b9bfaa41822c13], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b9bfa81c844fd7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b9bfaa2c439100], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b9bfaa5c5c063c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 806.899479ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b9bfaa627d381b], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b9bfaa6995fbe3], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b9bfa813196ed3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b9bfa95b4192aa], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b9bfa97e6c906c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 590.011921ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b9bfa9a4028378], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b9bfa9e2fe55ac], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b9bfa8139bcc74], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-3 to node1] STEP: 
Considering event: Type = [Normal], Name = [overcommit-3.16b9bfa9e56bc76f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b9bfa9ff11eb52], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 430.312841ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b9bfaa1b89e96a], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b9bfaa3e91fc7f], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b9bfa8142bb730], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b9bfaa10a77163], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b9bfaa363e61cf], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 630.644514ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b9bfaa403862c5], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b9bfaa48f1fe09], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b9bfa814c26404], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b9bfa9f5efc177], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b9bfaa188a7fbd], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 580.561627ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b9bfaa34b2e443], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b9bfaa3cfafe7a], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b9bfa815604e14], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b9bfaa143ba020], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b9bfaa2aabda0a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 376.448664ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b9bfaa37b5c30e], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b9bfaa3faf2e22], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b9bfa815eda148], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-7 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b9bfa8f122670b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b9bfa905048dee], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 333.579896ms] STEP: Considering event: Type = 
[Normal], Name = [overcommit-7.16b9bfa91ff51052], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b9bfa97aa440d2], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b9bfa81684fab8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b9bfa8ad1e1f6e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b9bfa8bf696b8b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 306.915492ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b9bfa91ad409d0], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b9bfa956ed2146], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b9bfa81705a781], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4273/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b9bfa9f19f50ef], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b9bfaa0714f614], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 360.025974ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b9bfaa2b6b776d], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b9bfaa392ad82c], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b9bfab9ea47669], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:11:24.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4273" for this suite. 
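The FailedScheduling warning for additional-pod above ("2 Insufficient ephemeral-storage") is the outcome this predicate spec checks for: once the overcommit-N pods consume the nodes' allocatable local ephemeral storage, one more pod requesting it no longer fits. As an illustration only, not the test's actual manifest (the 10Gi quantity is a made-up figure; pod name and image are taken from the log), a pod asks for ephemeral storage through its container resource requests roughly like this:

    // Illustrative sketch: a pod requesting local ephemeral storage.
    // The 10Gi quantity is hypothetical, not taken from the test.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.4.1",
                    Resources: v1.ResourceRequirements{
                        // If the sum of such requests exceeds a node's allocatable
                        // ephemeral-storage, the scheduler reports
                        // "Insufficient ephemeral-storage", as in the event above.
                        Requests: v1.ResourceList{
                            v1.ResourceEphemeralStorage: resource.MustParse("10Gi"),
                        },
                    },
                }},
            },
        }
        q := pod.Spec.Containers[0].Resources.Requests[v1.ResourceEphemeralStorage]
        fmt.Printf("%s requests %s of ephemeral-storage\n", pod.Name, q.String())
    }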
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.363 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":8,"skipped":3423,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:11:24.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 22 03:11:24.394: INFO: Waiting up to 1m0s for all nodes to be ready Nov 22 03:12:24.446: INFO: Waiting for terminating namespaces to be deleted... Nov 22 03:12:24.448: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 22 03:12:24.467: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting Nov 22 03:12:24.467: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting Nov 22 03:12:24.467: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 22 03:12:24.467: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Nov 22 03:12:24.493: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:12:24.493: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 22 03:12:24.493: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.493: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:24.494: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 
77000, cpuFraction: 0.0012987012987012987 Nov 22 03:12:24.494: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Nov 22 03:12:32.593: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:12:32.593: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 22 03:12:32.593: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:12:32.593: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:12:32.593: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 22 03:12:32.605: INFO: Waiting for running... Nov 22 03:12:32.609: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 22 03:12:37.675: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.675: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 22 03:12:37.676: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
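Judging from the logged values, the cpuFraction and memFraction figures in these ComputeCPUMemFraction blocks, including the recomputation the STEP line above announces, reduce to total requested divided by allocatable, capped at 1 once the balancing pods push the request past what the node holds (100 / 77000 ≈ 0.0012987; 104857600 / 178884628480 ≈ 0.00058617; 537700 / 77000 capped to 1). A minimal sketch of that arithmetic, under that assumption (the helper name is ours, not the framework's):

    // Sketch of the requested/allocatable ratio that the ComputeCPUMemFraction
    // log lines appear to report; fraction() is a hypothetical helper, capped at 1.
    package main

    import "fmt"

    func fraction(requested, allocatable float64) float64 {
        f := requested / allocatable
        if f > 1 {
            f = 1
        }
        return f
    }

    func main() {
        // Values taken from the node log lines above (milliCPU and bytes).
        fmt.Println(fraction(100, 77000))              // ≈ 0.0012987012987012987
        fmt.Println(fraction(104857600, 178884628480)) // ≈ 0.0005861744571961558
        // After the balancing pods are created the request exceeds allocatable,
        // so the fraction is reported as 1, as logged for both nodes.
        fmt.Println(fraction(537700, 77000)) // 1
    }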
Nov 22 03:12:37.676: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Pod for on the node: fb55c511-a01f-4f75-b5df-837d069ee0a2-0, Cpu: 38400, Mem: 89350041600 Nov 22 03:12:37.676: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 22 03:12:37.676: INFO: Node: node1, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884628480, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:12:51.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7359" for this suite. 
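The PodTopologySpread Scoring spec above applies the dedicated topology key kubernetes.io/e2e-pts-score to the two nodes, places a 4-replica ReplicaSet on node2, and then verifies that the test-pod lands on node1, the emptier topology domain. The log does not show the pod's actual spec; the sketch below is an illustrative scoring-only spread constraint over that key, with hypothetical labels and names:

    // Illustrative sketch of a scoring-only (ScheduleAnyway) topology spread
    // constraint over the dedicated kubernetes.io/e2e-pts-score key used above.
    // Not the test's actual pod; the foo=bar selector is hypothetical.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "test-pod",
                Labels: map[string]string{"foo": "bar"},
            },
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
                TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
                    MaxSkew:           1,
                    TopologyKey:       "kubernetes.io/e2e-pts-score",
                    WhenUnsatisfiable: v1.ScheduleAnyway, // score nodes, don't filter them
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"foo": "bar"},
                    },
                }},
            },
        }
        // With 4 matching replicas already on node2 and none on node1, the
        // spread score favours node1, which is what the test verifies.
        fmt.Printf("%+v\n", pod.Spec.TopologySpreadConstraints[0])
    }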
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:87.401 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":9,"skipped":3557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:12:51.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:12:51.799: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:12:51.808: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 03:12:51.810: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:12:51.820: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:12:51.820: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:12:51.820: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:12:51.820: INFO: Container discover ready: false, restart count 0 Nov 22 03:12:51.820: INFO: Container init ready: false, restart count 0 Nov 22 03:12:51.820: INFO: Container install ready: false, restart count 0 Nov 22 03:12:51.820: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:12:51.820: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:12:51.820: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:12:51.820: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:12:51.820: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:12:51.820: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:12:51.820: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:12:51.820: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:12:51.820: INFO: Container collectd ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:12:51.820: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:12:51.820: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:12:51.820: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:12:51.820: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container grafana ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:12:51.820: INFO: test-pod from sched-priority-7359 started at 2021-11-22 03:12:43 +0000 UTC 
(1 container statuses recorded) Nov 22 03:12:51.820: INFO: Container test-pod ready: true, restart count 0 Nov 22 03:12:51.820: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:12:51.828: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:12:51.828: INFO: Container discover ready: false, restart count 0 Nov 22 03:12:51.828: INFO: Container init ready: false, restart count 0 Nov 22 03:12:51.828: INFO: Container install ready: false, restart count 0 Nov 22 03:12:51.828: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:12:51.828: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:12:51.828: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:12:51.828: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:12:51.828: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:12:51.828: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:12:51.828: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:12:51.828: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:12:51.828: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:12:51.828: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:12:51.828: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:12:51.828: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:12:51.828: INFO: Container collectd ready: true, restart count 0 Nov 22 03:12:51.828: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:12:51.828: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:12:51.828: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:12:51.828: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:12:51.828: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:12:51.828: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container tas-extender ready: true, restart 
count 0 Nov 22 03:12:51.828: INFO: rs-e2e-pts-score-bkj2z from sched-priority-7359 started at 2021-11-22 03:12:37 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container e2e-pts-score ready: true, restart count 0 Nov 22 03:12:51.828: INFO: rs-e2e-pts-score-kk9gr from sched-priority-7359 started at 2021-11-22 03:12:37 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container e2e-pts-score ready: true, restart count 0 Nov 22 03:12:51.828: INFO: rs-e2e-pts-score-qbtdr from sched-priority-7359 started at 2021-11-22 03:12:37 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container e2e-pts-score ready: true, restart count 0 Nov 22 03:12:51.828: INFO: rs-e2e-pts-score-xtlsm from sched-priority-7359 started at 2021-11-22 03:12:37 +0000 UTC (1 container statuses recorded) Nov 22 03:12:51.828: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-325ffab4-4bcc-4cf0-bc57-594e158145e3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-325ffab4-4bcc-4cf0-bc57-594e158145e3 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-325ffab4-4bcc-4cf0-bc57-594e158145e3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:13:01.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2602" for this suite. 
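In the spec above, the framework applies the random label kubernetes.io/e2e-325ffab4-4bcc-4cf0-bc57-594e158145e3=42 to node2 and relaunches the pod "now with labels", i.e. with a required NodeAffinity term matching that label. The pod spec itself is not in the log; a minimal sketch of such a term, with a hypothetical pod name and container, could look like this:

    // Sketch of a required NodeAffinity term matching the random label the test
    // applied to node2 above. Illustrative only.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        key := "kubernetes.io/e2e-325ffab4-4bcc-4cf0-bc57-594e158145e3"
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
                Affinity: &v1.Affinity{
                    NodeAffinity: &v1.NodeAffinity{
                        // "Required" terms are hard filters: the pod schedules only
                        // onto a node carrying the label, which here is node2.
                        RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
                            NodeSelectorTerms: []v1.NodeSelectorTerm{{
                                MatchExpressions: []v1.NodeSelectorRequirement{{
                                    Key:      key,
                                    Operator: v1.NodeSelectorOpIn,
                                    Values:   []string{"42"},
                                }},
                            }},
                        },
                    },
                },
            },
        }
        fmt.Printf("%+v\n", pod.Spec.Affinity.NodeAffinity)
    }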
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.163 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":10,"skipped":3668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:13:01.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 22 03:13:01.975: INFO: Waiting up to 1m0s for all nodes to be ready Nov 22 03:14:02.028: INFO: Waiting for terminating namespaces to be deleted... Nov 22 03:14:02.030: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 22 03:14:02.047: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting Nov 22 03:14:02.047: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting Nov 22 03:14:02.047: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 22 03:14:02.047: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Nov 22 03:14:02.063: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.063: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:14:02.064: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 22 03:14:02.064: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200 Nov 22 03:14:02.064: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 
77000, cpuFraction: 0.0012987012987012987 Nov 22 03:14:02.064: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Nov 22 03:14:06.110: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:14:06.110: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 22 03:14:06.110: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:06.111: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 
Nov 22 03:14:06.111: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:14:06.111: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 22 03:14:06.156: INFO: Waiting for running... Nov 22 03:14:06.159: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 22 03:14:11.229: INFO: ComputeCPUMemFraction for node: node1 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:14:11.229: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 22 03:14:11.229: INFO: ComputeCPUMemFraction for node: node2 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 22 03:14:11.229: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 22 03:14:11.229: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:14:25.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9535" for this suite. 
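The spec above first places pod-with-label-security-s1 on one node (the kubernetes.io/hostname label is verified), then launches pod-with-pod-antiaffinity and checks it lands on the other node. The log does not show the anti-affinity spec; since this runs under SchedulerPriorities, the sketch below uses the preferred (scoring) form, and the security=S1 selector is inferred from the pod name, so treat both as assumptions:

    // Sketch of a soft pod anti-affinity term over kubernetes.io/hostname,
    // steering the pod away from the node running pod-with-label-security-s1.
    // The preferred form, weight, and security=S1 selector are assumptions.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
                Affinity: &v1.Affinity{
                    PodAntiAffinity: &v1.PodAntiAffinity{
                        // Soft anti-affinity: the scheduler scores down any node
                        // already running a pod matching security=S1, so the pod
                        // ends up on the other node, as verified above.
                        PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
                            Weight: 100,
                            PodAffinityTerm: v1.PodAffinityTerm{
                                LabelSelector: &metav1.LabelSelector{
                                    MatchExpressions: []metav1.LabelSelectorRequirement{{
                                        Key:      "security",
                                        Operator: metav1.LabelSelectorOpIn,
                                        Values:   []string{"S1"},
                                    }},
                                },
                                TopologyKey: "kubernetes.io/hostname",
                            },
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", pod.Spec.Affinity.PodAntiAffinity)
    }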
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:83.323 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":11,"skipped":4742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:14:25.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 22 03:14:25.298: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 03:14:25.307: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 03:14:25.309: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 22 03:14:25.317: INFO: cmk-7wvgm from kube-system started at 2021-11-21 22:38:17 +0000 UTC (2 container statuses recorded) Nov 22 03:14:25.317: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:14:25.317: INFO: cmk-init-discover-node1-brwt6 from kube-system started at 2021-11-21 22:37:36 +0000 UTC (3 container statuses recorded) Nov 22 03:14:25.317: INFO: Container discover ready: false, restart count 0 Nov 22 03:14:25.317: INFO: Container init ready: false, restart count 0 Nov 22 03:14:25.317: INFO: Container install ready: false, restart count 0 Nov 22 03:14:25.317: INFO: kube-flannel-cfzcv from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container kube-flannel ready: true, restart count 1 Nov 22 03:14:25.317: INFO: kube-multus-ds-amd64-wcr4n from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:14:25.317: INFO: kube-proxy-mb5cq from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container kube-proxy ready: true, restart count 1 Nov 22 03:14:25.317: INFO: kubernetes-metrics-scraper-5558854cb-kzhf7 from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container kubernetes-metrics-scraper ready: true, restart count 2 Nov 22 03:14:25.317: INFO: nginx-proxy-node1 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:14:25.317: INFO: node-feature-discovery-worker-lkpb8 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:14:25.317: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9xds6 from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:14:25.317: INFO: collectd-zmh78 from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:14:25.317: INFO: Container collectd ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:14:25.317: INFO: node-exporter-jj5rx from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:14:25.317: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:14:25.317: INFO: prometheus-k8s-0 from monitoring started at 2021-11-21 22:39:32 +0000 UTC (4 container statuses recorded) Nov 22 03:14:25.317: INFO: Container config-reloader ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container grafana ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Container prometheus ready: true, restart count 1 Nov 22 03:14:25.317: INFO: pod-with-pod-antiaffinity from sched-priority-9535 started at 2021-11-22 
03:14:11 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.317: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Nov 22 03:14:25.317: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 22 03:14:25.327: INFO: cmk-init-discover-node2-8jdqf from kube-system started at 2021-11-21 22:37:56 +0000 UTC (3 container statuses recorded) Nov 22 03:14:25.327: INFO: Container discover ready: false, restart count 0 Nov 22 03:14:25.327: INFO: Container init ready: false, restart count 0 Nov 22 03:14:25.327: INFO: Container install ready: false, restart count 0 Nov 22 03:14:25.327: INFO: cmk-prx26 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (2 container statuses recorded) Nov 22 03:14:25.327: INFO: Container nodereport ready: true, restart count 0 Nov 22 03:14:25.327: INFO: Container reconcile ready: true, restart count 0 Nov 22 03:14:25.327: INFO: cmk-webhook-6c9d5f8578-8fxd8 from kube-system started at 2021-11-21 22:38:18 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container cmk-webhook ready: true, restart count 0 Nov 22 03:14:25.327: INFO: kube-flannel-rdjt7 from kube-system started at 2021-11-21 22:26:48 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container kube-flannel ready: true, restart count 2 Nov 22 03:14:25.327: INFO: kube-multus-ds-amd64-6bg2m from kube-system started at 2021-11-21 22:26:58 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container kube-multus ready: true, restart count 1 Nov 22 03:14:25.327: INFO: kube-proxy-5xb56 from kube-system started at 2021-11-21 22:25:53 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container kube-proxy ready: true, restart count 2 Nov 22 03:14:25.327: INFO: kubernetes-dashboard-785dcbb76d-wrkrj from kube-system started at 2021-11-21 22:27:27 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 22 03:14:25.327: INFO: nginx-proxy-node2 from kube-system started at 2021-11-21 22:25:50 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container nginx-proxy ready: true, restart count 2 Nov 22 03:14:25.327: INFO: node-feature-discovery-worker-slrp4 from kube-system started at 2021-11-21 22:34:07 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container nfd-worker ready: true, restart count 0 Nov 22 03:14:25.327: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9jdcq from kube-system started at 2021-11-21 22:35:19 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 22 03:14:25.327: INFO: collectd-6t47m from monitoring started at 2021-11-21 22:43:10 +0000 UTC (3 container statuses recorded) Nov 22 03:14:25.327: INFO: Container collectd ready: true, restart count 0 Nov 22 03:14:25.327: INFO: Container collectd-exporter ready: true, restart count 0 Nov 22 03:14:25.327: INFO: Container rbac-proxy ready: true, restart count 0 Nov 22 03:14:25.327: INFO: node-exporter-r2vkb from monitoring started at 2021-11-21 22:39:21 +0000 UTC (2 container statuses recorded) Nov 22 03:14:25.327: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 22 03:14:25.327: INFO: Container node-exporter ready: true, restart count 0 Nov 22 03:14:25.327: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q64pf from monitoring started at 2021-11-21 22:42:22 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container 
tas-extender ready: true, restart count 0 Nov 22 03:14:25.327: INFO: pod-with-label-security-s1 from sched-priority-9535 started at 2021-11-22 03:14:02 +0000 UTC (1 container statuses recorded) Nov 22 03:14:25.327: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b9bfd5ff65aaf0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 22 03:14:26.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1248" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":12,"skipped":4770,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 22 03:14:26.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 22 03:14:26.407: INFO: Waiting up to 1m0s for all nodes to be ready Nov 22 03:15:26.479: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 03:15:26.481: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 22 03:15:26.500: INFO: The status of Pod cmk-init-discover-node1-brwt6 is Succeeded, skipping waiting
Nov 22 03:15:26.500: INFO: The status of Pod cmk-init-discover-node2-8jdqf is Succeeded, skipping waiting
Nov 22 03:15:26.500: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 22 03:15:26.500: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 22 03:15:26.522: INFO: ComputeCPUMemFraction for node: node1
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 22 03:15:26.522: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
Nov 22 03:15:26.522: INFO: ComputeCPUMemFraction for node: node2
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.522: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 22 03:15:26.522: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
[It] Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
Nov 22 03:15:26.538: INFO: ComputeCPUMemFraction for node: node1
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 22 03:15:26.538: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558
Nov 22 03:15:26.538: INFO: ComputeCPUMemFraction for node: node2
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q64pf, Cpu: 100, Mem: 209715200
Nov 22 03:15:26.538: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987
Nov 22 03:15:26.538: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619
Nov 22 03:15:26.552: INFO: Waiting for running...
Nov 22 03:15:26.555: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Nov 22 03:15:31.624: INFO: ComputeCPUMemFraction for node: node1
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.624: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1
Nov 22 03:15:31.625: INFO: Node: node1, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884628480, memFraction: 1
STEP: Compute Cpu, Mem Fraction after create balanced pods.
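The cpuFraction and memFraction values logged above are requested-over-allocatable ratios, capped at 1, and the "balanced" pods created just before this step are sized to bring both nodes to a comparable utilization fraction before the priority under test is exercised. A minimal sketch of that arithmetic using the node1 numbers from the log (not the e2e framework's actual helper):

```go
// Sketch of the fraction computation reported as cpuFraction/memFraction above.
package main

import "fmt"

// fraction divides requested by allocatable and caps the result at 1,
// which is why the post-balancing lines report exactly 1.
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Before the balancing pods: 100m CPU of 77000m, 104857600 bytes of 178884628480.
	fmt.Println(fraction(100, 77000))              // ≈ 0.0012987012987012987
	fmt.Println(fraction(104857600, 178884628480)) // ≈ 0.0005861744571961558
	// After the balancing pods the request totals exceed allocatable, so both cap at 1.
	fmt.Println(fraction(499300, 77000))               // 1
	fmt.Println(fraction(1161655371776, 178884628480)) // 1
}
```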
Nov 22 03:15:31.625: INFO: ComputeCPUMemFraction for node: node2
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Pod for on the node: 9c7ac6db-4fd2-4fe7-889f-2e35e5edd6a4-0, Cpu: 38400, Mem: 89350039552
Nov 22 03:15:31.625: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1
Nov 22 03:15:31.625: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884632576, memFraction: 1
STEP: Create a RC, with 0 replicas
STEP: Trying to apply avoidPod annotations on the first node.
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1.
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-984 to 1
STEP: Verify the pods should not scheduled to the node: node1
STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-984, will wait for the garbage collector to delete the pods
Nov 22 03:15:37.809: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.47637ms
Nov 22 03:15:37.909: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.3095ms
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 22 03:15:53.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-984" for this suite.
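The avoidPod steps above work by putting the scheduler.alpha.kubernetes.io/preferAvoidPods annotation on node1, naming the scheduler-priority-avoid-pod ReplicationController, so the scheduler scores that node down for pods owned by that controller and the scaled-up replica lands on node2 instead. A hedged sketch of how such an annotation value can be assembled from the core/v1 types (the UID, reason, and message are placeholders, not the values the test uses):

```go
// Sketch only: build the JSON value for the preferAvoidPods node annotation.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					UID:        "00000000-0000-0000-0000-000000000000", // placeholder UID
					Controller: &controller,
				},
			},
			Reason:  "e2e scheduling priorities test",          // placeholder
			Message: "prefer not to run this RC's pods here",   // placeholder
		}},
	}
	val, _ := json.Marshal(avoid)
	// The marshalled JSON becomes the value of the node annotation
	// scheduler.alpha.kubernetes.io/preferAvoidPods (corev1.PreferAvoidPodsAnnotationKey).
	fmt.Printf("%s=%s\n", corev1.PreferAvoidPodsAnnotationKey, val)
}
```

The verification step above then checks that the single replica is not scheduled to the annotated node (node1).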
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:87.255 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should avoid nodes that have avoidPod annotation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":13,"skipped":5591,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Nov 22 03:15:53.642: INFO: Running AfterSuite actions on all nodes
Nov 22 03:15:53.642: INFO: Running AfterSuite actions on node 1
Nov 22 03:15:53.642: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0}
Ran 13 of 5770 Specs in 524.654 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped
PASS
Ginkgo ran 1 suite in 8m45.938535988s
Test Suite Passed