I1113 05:08:45.630355 23 e2e.go:129] Starting e2e run "902d2d54-1faa-4eac-8bc5-b1e0e011cc72" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636780124 - Will randomize all specs
Will run 13 of 5770 specs
Nov 13 05:08:45.645: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:08:45.650: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 05:08:45.680: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 05:08:45.750: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 05:08:45.750: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 05:08:45.750: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 05:08:45.750: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 05:08:45.750: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 05:08:45.767: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 05:08:45.767: INFO: e2e test version: v1.21.5
Nov 13 05:08:45.768: INFO: kube-apiserver version: v1.21.1
Nov 13 05:08:45.768: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 05:08:45.774: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:08:45.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a
namespace api object, basename sched-priority W1113 05:08:45.808130 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 05:08:45.808: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 05:08:45.811: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 05:08:45.814: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:09:45.868: INFO: Waiting for terminating namespaces to be deleted... Nov 13 05:09:45.870: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:09:45.891: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:09:45.891: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:09:45.891: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:09:45.891: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 05:09:45.906: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:09:45.906: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:09:45.906: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:09:45.906: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:09:45.906: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. 
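The ComputeCPUMemFraction figures above are plain ratios of the summed pod requests on a node to that node's allocatable capacity, capped at 1. A minimal sketch of the arithmetic, using the node1 numbers from the log (variable names are illustrative, not the e2e framework's identifiers):

package main

import "fmt"

func main() {
	// Values taken from the node1 lines above.
	totalRequestedCPUMilli := 100.0       // summed CPU requests of pods on the node, millicores
	cpuAllocatableMilli := 77000.0        // node allocatable CPU, millicores
	totalRequestedMemBytes := 104857600.0 // summed memory requests, bytes (100 MiB)
	memAllocatableBytes := 178884632576.0 // node allocatable memory, bytes

	cpuFraction := totalRequestedCPUMilli / cpuAllocatableMilli
	memFraction := totalRequestedMemBytes / memAllocatableBytes

	fmt.Printf("cpuFraction: %v\n", cpuFraction) // ~0.0012987, as logged
	fmt.Printf("memFraction: %v\n", memFraction) // ~0.00058617, as logged
}

In the "create balanced pods" steps that follow, filler pods are created so that both nodes report the same fractions before the priority under test is exercised.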
STEP: Verifying the node has a label kubernetes.io/hostname Nov 13 05:09:49.947: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.947: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:09:49.948: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:09:49.948: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:49.948: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:09:49.948: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 05:09:49.959: INFO: Waiting for running... 
Nov 13 05:09:49.963: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 05:09:55.037: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:09:55.037: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.037: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.037: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.037: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.037: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:09:55.038: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 13 05:09:55.038: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 05:09:55.038: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:09:55.038: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:10:05.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-838" for this suite. 
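Looking back at the podAntiAffinity step above: the second pod expresses anti-affinity, keyed on kubernetes.io/hostname, against the security label implied by pod-with-label-security-s1's name, so the scheduler should favour the node that is not running that pod. A minimal sketch of such a pod spec in the core/v1 Go types the e2e framework builds on (the preferred-vs-required choice, weight, label value and image are assumptions, not the test's exact spec):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}}, // image is illustrative
			Affinity: &v1.Affinity{
				PodAntiAffinity: &v1.PodAntiAffinity{
					// Soft anti-affinity: score nodes lower if they already run a
					// pod labelled security=S1 on the same hostname.
					PreferredDuringSchedulingIgnoredDuringExecution: []v1.WeightedPodAffinityTerm{{
						Weight: 10,
						PodAffinityTerm: v1.PodAffinityTerm{
							LabelSelector: &metav1.LabelSelector{
								MatchExpressions: []metav1.LabelSelectorRequirement{{
									Key:      "security",
									Operator: metav1.LabelSelectorOpIn,
									Values:   []string{"S1"}, // assumed label value
								}},
							},
							TopologyKey: "kubernetes.io/hostname",
						},
					}},
				},
			},
		},
	}
	fmt.Println(pod.Name, "has podAntiAffinity:", pod.Spec.Affinity.PodAntiAffinity != nil)
}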
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:79.305 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":1,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:10:05.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 05:10:05.113: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:11:05.165: INFO: Waiting for terminating namespaces to be deleted... Nov 13 05:11:05.168: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:11:05.189: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:11:05.189: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:11:05.189: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:11:05.189: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Nov 13 05:11:05.209: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:11:05.209: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:11:05.209: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.209: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:11:05.209: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Nov 13 05:11:05.226: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:11:05.226: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:11:05.226: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:11:05.226: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:11:05.226: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 05:11:05.242: INFO: Waiting for running... Nov 13 05:11:05.243: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 05:11:10.312: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:11:10.312: INFO: Node: node1, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 13 05:11:10.312: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Pod for on the node: afd0b908-1146-46e2-87ae-cb03cbd835d5-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:11:10.312: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:11:10.312: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884628480, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-2009 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-2009, will wait for the garbage collector to delete the pods Nov 13 05:11:16.490: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.131831ms Nov 13 05:11:16.591: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.983519ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:11:32.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2009" for this suite. 
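The avoidPod test above patches node1 with the scheduler.alpha.kubernetes.io/preferAvoidPods annotation referencing the scheduler-priority-avoid-pod ReplicationController, then scales the RC to 1 and verifies its pod does not land on node1. A rough sketch of how such an annotation value can be assembled (the UID, reason and message are placeholders, and this is not the test's exact code):

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	isController := true
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod", // RC name from the log above
					UID:        "placeholder-uid",              // placeholder: the test would use the created RC's real UID
					Controller: &isController,
				},
			},
			Reason:  "some reason",  // placeholder
			Message: "some message", // placeholder
		}},
	}

	val, err := json.Marshal(avoid)
	if err != nil {
		panic(err)
	}
	// Patching this annotation onto a node asks the scheduler to prefer keeping
	// the referenced controller's pods away from that node (an alpha annotation).
	fmt.Printf("%s=%s\n", v1.PreferAvoidPodsAnnotationKey, string(val))
}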
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:87.240 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":2,"skipped":644,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:11:32.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 05:11:32.353: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:12:32.422: INFO: Waiting for terminating namespaces to be deleted... Nov 13 05:12:32.424: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:12:32.446: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:12:32.446: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:12:32.446: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:12:32.446: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Nov 13 05:12:32.463: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:12:32.463: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:12:32.463: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.463: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:12:32.463: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Nov 13 05:12:32.478: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:12:32.478: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:12:32.478: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:12:32.478: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:12:32.478: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 05:12:32.494: INFO: Waiting for running... Nov 13 05:12:32.495: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 05:12:37.563: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:12:37.563: INFO: Node: node1, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 13 05:12:37.563: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.563: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.564: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.564: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.564: INFO: Pod for on the node: 628874df-007e-4ee5-a870-8883ddc7cfc0-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:12:37.564: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:12:37.564: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884628480, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
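Each of the taints applied and verified in the following steps has effect PreferNoSchedule, and the pod created afterwards tolerates exactly the ten taints on the first node, so the scheduler should prefer that node. A minimal sketch of one taint and its matching toleration in the core/v1 Go types (the key and value below are placeholders for the generated kubernetes.io/e2e-scheduling-priorities-* pairs in the log):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One PreferNoSchedule taint of the shape logged below.
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example",
		Value:  "testing-taint-value-example",
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// A toleration matching that taint exactly; the test's pod carries one such
	// toleration per taint on the first node, and none for the other nodes' taints.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}

	fmt.Printf("taint:      %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration: %s=%s:%s\n", toleration.Key, toleration.Value, toleration.Effect)
}

Because PreferNoSchedule is a soft effect, the intolerable taints on the other nodes only lower their score rather than filter them out, which is why this case sits under SchedulerPriorities rather than the predicates tests.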
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f0cb7f24-cd82-44b9-a63e=testing-taint-value-7adcca89-c9b7-4796-a5c7-f60fcb576789:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-81b15452-82ca-4534-855b=testing-taint-value-e7c63b5d-eedc-4264-b3c7-530c473d8217:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4d97bbc9-d7a5-496e-8170=testing-taint-value-9c262cbd-4742-4865-8fd2-3caa24c99fa6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-78ddd766-fabd-4348-b017=testing-taint-value-81b002c2-9659-48bf-9752-c281df5b5733:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3d71f242-91e7-4c31-a4f5=testing-taint-value-9366d303-7d8a-4668-bff3-483796891f61:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b9444bd8-fcd8-45fb-a51b=testing-taint-value-58b9ea09-4ef8-4d17-933d-5871491bd3ba:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2c1e018d-d61c-487c-83e1=testing-taint-value-44402ef8-3877-4a17-8e2a-eed221e30088:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2555a970-3d02-4a19-a2b2=testing-taint-value-6fdffb77-a630-4082-bd18-4354da0ff8e7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-fff912e2-37f7-4f1d-a1d0=testing-taint-value-423c97da-4899-4bf1-b836-26d61f1705d4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1559c71b-d065-44f9-b40c=testing-taint-value-cd780647-92a4-4dee-8397-f441e3fc488b:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-03b833d1-5e02-4e45-b4a0=testing-taint-value-bea9562e-6e7f-457c-8de8-3390d397e44e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0a8da0ab-ffb4-4e52-8d15=testing-taint-value-2ff8dbfa-63af-490b-ab15-e315aee794c2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-12591239-4011-4f2e-af2b=testing-taint-value-2e841810-f57e-4523-a447-1663f9b06894:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-18242083-7cc4-45cc-b663=testing-taint-value-b7435ee8-0cdb-45fc-b1e2-d11984858230:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-7de350c2-8fbb-43bd-91a0=testing-taint-value-e0ba7823-889a-488e-a8f6-3cbaf575ad24:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-63950416-b307-42c8-8765=testing-taint-value-d0d70eb5-5645-48ee-a8a3-d04b16a1a13d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-580fbb1d-5092-4d83-a713=testing-taint-value-198f790a-44e8-43c8-a0f3-eed30883a117:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-84ea9767-ba77-43e9-b13a=testing-taint-value-08e98c3e-cb08-4ad1-a764-54576f009e96:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c39a085a-3bbd-4f86-b462=testing-taint-value-266deb0a-2cec-48da-bc53-58f328e3a8a4:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-d4ceabb6-2899-49a3-8c15=testing-taint-value-24c4c8a3-e1f7-4889-aaea-fdaf9589180e:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-03b833d1-5e02-4e45-b4a0=testing-taint-value-bea9562e-6e7f-457c-8de8-3390d397e44e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0a8da0ab-ffb4-4e52-8d15=testing-taint-value-2ff8dbfa-63af-490b-ab15-e315aee794c2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-12591239-4011-4f2e-af2b=testing-taint-value-2e841810-f57e-4523-a447-1663f9b06894:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-18242083-7cc4-45cc-b663=testing-taint-value-b7435ee8-0cdb-45fc-b1e2-d11984858230:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-7de350c2-8fbb-43bd-91a0=testing-taint-value-e0ba7823-889a-488e-a8f6-3cbaf575ad24:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-63950416-b307-42c8-8765=testing-taint-value-d0d70eb5-5645-48ee-a8a3-d04b16a1a13d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-580fbb1d-5092-4d83-a713=testing-taint-value-198f790a-44e8-43c8-a0f3-eed30883a117:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-84ea9767-ba77-43e9-b13a=testing-taint-value-08e98c3e-cb08-4ad1-a764-54576f009e96:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c39a085a-3bbd-4f86-b462=testing-taint-value-266deb0a-2cec-48da-bc53-58f328e3a8a4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-d4ceabb6-2899-49a3-8c15=testing-taint-value-24c4c8a3-e1f7-4889-aaea-fdaf9589180e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f0cb7f24-cd82-44b9-a63e=testing-taint-value-7adcca89-c9b7-4796-a5c7-f60fcb576789:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-81b15452-82ca-4534-855b=testing-taint-value-e7c63b5d-eedc-4264-b3c7-530c473d8217:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4d97bbc9-d7a5-496e-8170=testing-taint-value-9c262cbd-4742-4865-8fd2-3caa24c99fa6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-78ddd766-fabd-4348-b017=testing-taint-value-81b002c2-9659-48bf-9752-c281df5b5733:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3d71f242-91e7-4c31-a4f5=testing-taint-value-9366d303-7d8a-4668-bff3-483796891f61:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b9444bd8-fcd8-45fb-a51b=testing-taint-value-58b9ea09-4ef8-4d17-933d-5871491bd3ba:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2c1e018d-d61c-487c-83e1=testing-taint-value-44402ef8-3877-4a17-8e2a-eed221e30088:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-2555a970-3d02-4a19-a2b2=testing-taint-value-6fdffb77-a630-4082-bd18-4354da0ff8e7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-fff912e2-37f7-4f1d-a1d0=testing-taint-value-423c97da-4899-4bf1-b836-26d61f1705d4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1559c71b-d065-44f9-b40c=testing-taint-value-cd780647-92a4-4dee-8397-f441e3fc488b:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:12:52.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3968" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:80.595 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":3,"skipped":814,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:12:52.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:12:52.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:12:52.963: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:12:52.966: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:12:52.976: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:12:52.976: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:12:52.976: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:12:52.976: INFO: Container discover ready: false, restart count 0 Nov 13 05:12:52.976: INFO: Container init ready: false, restart count 0 Nov 13 05:12:52.976: INFO: Container install ready: false, restart count 0 Nov 13 05:12:52.976: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:12:52.976: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:12:52.976: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:12:52.976: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:12:52.976: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:12:52.976: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:12:52.976: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:12:52.976: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:12:52.976: INFO: Container collectd ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:12:52.976: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:12:52.976: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:12:52.976: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:12:52.976: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container grafana ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:12:52.976: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:12:52.976: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:12:52.976: INFO: with-tolerations from sched-priority-3968 started at 2021-11-13 05:12:38 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.976: INFO: Container with-tolerations ready: true, restart count 0 Nov 13 05:12:52.976: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:12:52.983: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:12:52.983: INFO: Container discover ready: false, restart count 0 Nov 13 05:12:52.984: INFO: Container init ready: false, restart count 0 Nov 13 05:12:52.984: INFO: Container install ready: false, restart count 0 Nov 13 05:12:52.984: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:12:52.984: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:12:52.984: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:12:52.984: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:12:52.984: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:12:52.984: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:12:52.984: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:12:52.984: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:12:52.984: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:12:52.984: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:12:52.984: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:12:52.984: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:12:52.984: INFO: Container collectd ready: true, restart count 0 Nov 13 05:12:52.984: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:12:52.984: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:12:52.984: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:12:52.984: INFO: Container kube-rbac-proxy 
ready: true, restart count 0
Nov 13 05:12:52.984: INFO: Container node-exporter ready: true, restart count 0
Nov 13 05:12:52.984: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded)
Nov 13 05:12:52.984: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b70313c50b36c5], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:12:54.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6528" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":4,"skipped":1179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:12:54.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 13 05:12:54.063: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 13 05:12:54.071: INFO: Waiting for terminating namespaces to be deleted...
Nov 13 05:12:54.073: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:12:54.084: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:12:54.084: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:12:54.084: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:12:54.084: INFO: Container discover ready: false, restart count 0 Nov 13 05:12:54.084: INFO: Container init ready: false, restart count 0 Nov 13 05:12:54.084: INFO: Container install ready: false, restart count 0 Nov 13 05:12:54.084: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:12:54.084: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:12:54.084: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:12:54.084: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:12:54.084: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:12:54.084: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:12:54.084: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:12:54.084: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:12:54.084: INFO: Container collectd ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:12:54.084: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:12:54.084: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:12:54.084: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:12:54.084: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container grafana ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:12:54.084: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:12:54.084: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:12:54.084: INFO: with-tolerations from sched-priority-3968 started at 2021-11-13 05:12:38 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.084: INFO: Container with-tolerations ready: true, restart count 0 Nov 13 05:12:54.084: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:12:54.094: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:12:54.094: INFO: Container discover ready: false, restart count 0 Nov 13 05:12:54.094: INFO: Container init ready: false, restart count 0 Nov 13 05:12:54.094: INFO: Container install ready: false, restart count 0 Nov 13 05:12:54.094: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:12:54.094: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:12:54.094: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:12:54.094: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:12:54.094: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:12:54.094: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:12:54.094: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:12:54.094: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:12:54.094: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:12:54.094: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:12:54.094: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:12:54.094: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:12:54.094: INFO: Container collectd ready: true, restart count 0 Nov 13 05:12:54.094: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:12:54.094: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:12:54.094: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:12:54.094: INFO: Container kube-rbac-proxy 
ready: true, restart count 0 Nov 13 05:12:54.094: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:12:54.094: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:12:54.094: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-961d28eb-64d8-4a92-b6e0-f7f54e397331=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-468824ed-b7b8-4efb-9652-3d1c26ca17e0 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70314060008ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3745/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70314628482df], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70314765fee12], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 333.139899ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b703147e2a27fa], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b7031486166638], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b703156cfcef94], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b703156f1bc878], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-961d28eb-64d8-4a92-b6e0-f7f54e397331: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b703156f1bc878], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-961d28eb-64d8-4a92-b6e0-f7f54e397331: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70314060008ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3745/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70314628482df], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70314765fee12], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 333.139899ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b703147e2a27fa], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b7031486166638], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b703156cfcef94], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-961d28eb-64d8-4a92-b6e0-f7f54e397331=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b70315b643ee72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3745/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-468824ed-b7b8-4efb-9652-3d1c26ca17e0 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-468824ed-b7b8-4efb-9652-3d1c26ca17e0 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-961d28eb-64d8-4a92-b6e0-f7f54e397331=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:13:02.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3745" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:8.182 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":5,"skipped":1518,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  verify pod overhead is accounted for
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:13:02.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 13 05:13:02.245: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 13 05:13:02.254: INFO: Waiting for terminating namespaces to be deleted...
Nov 13 05:13:02.256: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:13:02.277: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:13:02.277: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:13:02.277: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:13:02.277: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:13:02.277: INFO: Container discover ready: false, restart count 0 Nov 13 05:13:02.277: INFO: Container init ready: false, restart count 0 Nov 13 05:13:02.277: INFO: Container install ready: false, restart count 0 Nov 13 05:13:02.277: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.277: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:13:02.277: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.277: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:13:02.278: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.278: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:13:02.278: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.278: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:13:02.278: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.278: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:13:02.278: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.278: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:13:02.278: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.278: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:13:02.278: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:13:02.278: INFO: Container collectd ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:13:02.278: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:13:02.278: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:13:02.278: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:13:02.278: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container grafana ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:13:02.278: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:13:02.278: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:13:02.278: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:13:02.278: INFO: with-tolerations from sched-priority-3968 started at 2021-11-13 05:12:38 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.278: INFO: Container with-tolerations ready: false, restart count 0 Nov 13 05:13:02.278: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:13:02.291: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:13:02.291: INFO: Container discover ready: false, restart count 0 Nov 13 05:13:02.291: INFO: Container init ready: false, restart count 0 Nov 13 05:13:02.291: INFO: Container install ready: false, restart count 0 Nov 13 05:13:02.291: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:13:02.291: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:13:02.291: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:13:02.291: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:13:02.291: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:13:02.291: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:13:02.291: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:13:02.291: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:13:02.291: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:13:02.291: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:13:02.291: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:13:02.291: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:13:02.291: INFO: Container collectd ready: true, restart count 0 Nov 13 05:13:02.291: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:13:02.291: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:13:02.291: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:13:02.291: INFO: Container kube-rbac-proxy 
ready: true, restart count 0 Nov 13 05:13:02.291: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:13:02.291: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:13:02.291: INFO: still-no-tolerations from sched-pred-3745 started at 2021-11-13 05:13:01 +0000 UTC (1 container statuses recorded) Nov 13 05:13:02.291: INFO: Container still-no-tolerations ready: false, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d.16b70316e00f1bf4], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d.16b70318bda90ffd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5668/filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d.16b7031912f1967a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d.16b7031926998967], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 329.765129ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d.16b703192cbc6039], Reason = [Created], Message = [Created container filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d] STEP: Considering event: Type = [Normal], Name = [filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d.16b7031933ccdab4], Reason = [Started], Message = [Started container filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d] STEP: Considering event: Type = [Normal], Name = [without-label.16b70315f0ddd124], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5668/without-label to node1] STEP: Considering event: Type = [Normal], Name = [without-label.16b703165ebea41f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16b70316721a75fb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 324.772111ms] STEP: Considering event: Type = [Normal], Name = [without-label.16b703167a829ca3], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b703168292bfc8], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = 
[without-label.16b70316df57d927], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [additional-pode48f0f78-98a9-424f-8d1c-0a9aa1fbb504.16b70319ace5cf38], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:13:19.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5668" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:17.189 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates pod overhead is considered along with resource limits of pods that are allowed to run
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209
    verify pod overhead is accounted for
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":6,"skipped":1745,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:13:19.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 13 05:13:19.442: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 13 05:13:19.451: INFO: Waiting for
terminating namespaces to be deleted... Nov 13 05:13:19.457: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:13:19.480: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:13:19.480: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:13:19.480: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:13:19.480: INFO: Container discover ready: false, restart count 0 Nov 13 05:13:19.480: INFO: Container init ready: false, restart count 0 Nov 13 05:13:19.480: INFO: Container install ready: false, restart count 0 Nov 13 05:13:19.480: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:13:19.480: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:13:19.480: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:13:19.480: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:13:19.480: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:13:19.480: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:13:19.480: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:13:19.480: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:13:19.480: INFO: Container collectd ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:13:19.480: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:13:19.480: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:13:19.480: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:13:19.480: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container grafana ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:13:19.480: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 
2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 05:13:19.480: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:13:19.480: INFO: filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d from sched-pred-5668 started at 2021-11-13 05:13:14 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.480: INFO: Container filler-pod-17d4f1fa-ee03-4735-b9ab-074eda64f33d ready: true, restart count 0 Nov 13 05:13:19.480: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:13:19.489: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:13:19.489: INFO: Container discover ready: false, restart count 0 Nov 13 05:13:19.489: INFO: Container init ready: false, restart count 0 Nov 13 05:13:19.489: INFO: Container install ready: false, restart count 0 Nov 13 05:13:19.489: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:13:19.489: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:13:19.489: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:13:19.489: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:13:19.489: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:13:19.489: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:13:19.489: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:13:19.489: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:13:19.489: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:13:19.489: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:13:19.489: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:13:19.489: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:13:19.489: INFO: Container collectd ready: true, restart count 0 Nov 13 05:13:19.489: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:13:19.489: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:13:19.489: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 
UTC (2 container statuses recorded) Nov 13 05:13:19.489: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:13:19.489: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:13:19.489: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:13:19.489: INFO: Container tas-extender ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7e8b37c2-36a6-4543-9443-b2ffa586fb97=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-154a51d9-f76c-4306-a479-a1f580d91964 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-154a51d9-f76c-4306-a479-a1f580d91964 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-154a51d9-f76c-4306-a479-a1f580d91964 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7e8b37c2-36a6-4543-9443-b2ffa586fb97=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:13:27.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9259" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:8.191 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":7,"skipped":2354,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PodTopologySpread Preemption
  validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:13:27.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Nov 13 05:13:27.643: INFO: Waiting up to 1m0s for all nodes to be ready
Nov 13 05:14:27.701: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node2.
STEP: Apply 10 fake resource to node node1.
[It] validates proper pods are preempted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 05:15:03.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5116" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78

• [SLOW TEST:96.379 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":8,"skipped":2420,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 05:15:03.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Nov 13 05:15:04.020: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 13 05:15:04.030: INFO: Waiting for terminating namespaces to be deleted...
Nov 13 05:15:04.042: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:15:04.054: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:15:04.054: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:04.054: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:15:04.054: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:04.054: INFO: Container init ready: false, restart count 0 Nov 13 05:15:04.054: INFO: Container install ready: false, restart count 0 Nov 13 05:15:04.054: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:15:04.054: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:15:04.054: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:04.054: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:15:04.054: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:04.054: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:04.054: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:04.054: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:04.054: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:15:04.054: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:04.054: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:04.054: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:15:04.054: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container grafana ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:15:04.054: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:15:04.054: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:15:04.054: INFO: low-1 from sched-preemption-5116 started at 2021-11-13 05:14:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container low-1 ready: true, restart count 0 Nov 13 05:15:04.054: INFO: medium from sched-preemption-5116 started at 2021-11-13 05:15:01 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.054: INFO: Container medium ready: true, restart count 0 Nov 13 05:15:04.054: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:15:04.064: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:15:04.064: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:04.064: INFO: Container init ready: false, restart count 0 Nov 13 05:15:04.064: INFO: Container install ready: false, restart count 0 Nov 13 05:15:04.064: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:15:04.064: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:04.064: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:04.064: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.064: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:15:04.064: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.064: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:04.064: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.064: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:15:04.064: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.064: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:15:04.064: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.064: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:15:04.064: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.064: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:04.064: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.065: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:04.065: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.065: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:04.065: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:04.065: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:04.065: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:04.065: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 
05:15:04.065: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:04.065: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:04.065: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:04.065: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.065: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:15:04.065: INFO: high from sched-preemption-5116 started at 2021-11-13 05:14:35 +0000 UTC (1 container statuses recorded) Nov 13 05:15:04.065: INFO: Container high ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a51a09d5-55e9-455f-9432-d248bb05d6c1 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.207 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.207 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-a51a09d5-55e9-455f-9432-d248bb05d6c1 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-a51a09d5-55e9-455f-9432-d248bb05d6c1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:15:22.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1556" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.197 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":9,"skipped":2609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:15:22.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:15:22.234: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:15:22.242: INFO: Waiting for terminating namespaces to be deleted... 
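The predicate being set up here exercises local ephemeral-storage accounting. As background, a pod subject to LocalStorageCapacityIsolation declares the resource in its spec; a minimal, hypothetical manifest sketch follows (the pod name and sizes are illustrative, not taken from the test):

# Hypothetical pod manifest expressed as a Python dict, illustrating the
# ephemeral-storage request/limit that LocalStorageCapacityIsolation accounts for.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ephemeral-demo"},            # illustrative name
    "spec": {
        "containers": [{
            "name": "app",
            "image": "k8s.gcr.io/pause:3.4.1",          # image also used by the test pods
            "resources": {
                "requests": {"ephemeral-storage": "2Gi"},  # counted against node allocatable
                "limits":   {"ephemeral-storage": "4Gi"},
            },
        }],
    },
}
print(pod["spec"]["containers"][0]["resources"])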
Nov 13 05:15:22.246: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:15:22.256: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:15:22.256: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:22.256: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:15:22.256: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:22.256: INFO: Container init ready: false, restart count 0 Nov 13 05:15:22.256: INFO: Container install ready: false, restart count 0 Nov 13 05:15:22.256: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:15:22.256: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:15:22.256: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:22.256: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:15:22.256: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:22.256: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:22.256: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:22.256: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:22.256: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:15:22.256: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:22.256: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:22.256: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:15:22.256: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container grafana ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:15:22.256: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:15:22.256: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:15:22.256: INFO: pod1 from sched-pred-1556 started at 2021-11-13 05:15:08 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container agnhost ready: true, restart count 0 Nov 13 05:15:22.256: INFO: pod2 from sched-pred-1556 started at 2021-11-13 05:15:12 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container agnhost ready: true, restart count 0 Nov 13 05:15:22.256: INFO: pod3 from sched-pred-1556 started at 2021-11-13 05:15:18 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.256: INFO: Container agnhost ready: true, restart count 0 Nov 13 05:15:22.256: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:15:22.278: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:15:22.278: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:22.278: INFO: Container init ready: false, restart count 0 Nov 13 05:15:22.278: INFO: Container install ready: false, restart count 0 Nov 13 05:15:22.278: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:15:22.278: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:22.278: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:22.278: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:15:22.278: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:22.278: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:15:22.278: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:15:22.278: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:15:22.278: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:22.278: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:22.278: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:22.278: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:22.278: INFO: Container collectd ready: 
true, restart count 0 Nov 13 05:15:22.278: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:22.278: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:15:22.278: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:22.278: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:22.278: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:22.278: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:15:22.278: INFO: Container tas-extender ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Nov 13 05:15:22.313: INFO: Pod cmk-4tcdw requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod cmk-qhvr7 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod cmk-webhook-6c9d5f8578-2gp25 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod kube-flannel-mg66r requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod kube-flannel-r7bbp requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod kube-multus-ds-amd64-2wqj5 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod kube-multus-ds-amd64-4wqsv requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod kube-proxy-p6kbl requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod kube-proxy-pzhf2 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod kubernetes-dashboard-785dcbb76d-w2mls requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod kubernetes-metrics-scraper-5558854cb-jmbpk requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod node-feature-discovery-worker-mm7xs requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod node-feature-discovery-worker-zgr4c requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod collectd-74xkn requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod collectd-mp2z6 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod node-exporter-hqkfs requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod node-exporter-hstd9 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod prometheus-operator-585ccfb458-qcz7s requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54 requesting local ephemeral resource =0 on Node node2 Nov 13 05:15:22.313: INFO: Pod pod1 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod pod2 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Pod pod3 requesting local ephemeral resource =0 on Node node1 Nov 13 05:15:22.313: INFO: Using pod capacity: 40542413347 Nov 13 05:15:22.313: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 Nov 13 05:15:22.313: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Nov 13 05:15:22.504: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b703368895412f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b7033702c51124], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b70337177260d6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 346.892737ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b7033736f7ed89], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b703378505ffca], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b703368914f520], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b7033703bd2b47], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b7033750af6fc3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.290936614s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b70337729bf873], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b70337dc42f701], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b703368e0eb97c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b703389f6434d3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b70338bd5fd732], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 503.02127ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b70338d75fb79f], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b70338def3f2d7], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b703368ea245f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-11 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b70338b603903c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = 
[Normal], Name = [overcommit-11.16b70338ef59ff98], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 961.958351ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b70339110fb95b], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b703393bde5fa3], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b703368f516836], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b703391cbfc6d2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b70339467cf344], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 700.257637ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b70339587bef10], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b7033961462dde], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b703368fcdb207], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b70338a3b27ed0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b70338b656d007], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 312.745828ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b70338d3ab2cb0], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b703391c6da27b], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b70336904d7dd1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b70338d68d26b5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b7033905181599], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 780.849385ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b703391c992d05], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b703393654dcbd], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b7033690d7b733], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b70337bca385e4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b70337f24b8878], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 900.191511ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b703383f781c26], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type 
= [Normal], Name = [overcommit-15.16b70338bc523285], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b7033691742a9c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b70338b5b8cd39], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b70338d3beae0e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 503.693878ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b70338e12ce2a4], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b703392467a461], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b70336920833a4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b7033775991894], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b70337dbfccfbd], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.717798396s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b70337ee0db81f], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b7033840db41e2], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b7033692a36ba1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b70338dde666e0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b703391cb51b32], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.053727193s] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b70339325132b8], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b703394350d93e], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b7033693389dbc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b703391b848cdd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b703392e1925a8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 311.71792ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b7033937e0bbcc], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b703395d29e364], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b7033689a3f39a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-2 to node2] STEP: Considering event: Type = 
[Normal], Name = [overcommit-2.16b7033762d3e189], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b703379b4ed56f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 947.570823ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b703382cbccda3], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b70338ad41936e], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b703368a2c99c3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b70338d9b523ff], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b70338eda7a78a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 334.655393ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b70338f441a15d], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b70338fe5d197a], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b703368aac54d9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b70338dae26047], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b703390b3a74f7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 811.072255ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b703391210ec32], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b703391aae3d38], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b703368b2c4819], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b70338dc0b7d2e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b703391f00576f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.123337538s] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b703392583f51e], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b703392c673b70], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b703368bb0b1bf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b703376444b0cb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b70337e5e3b101], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 2.174673001s] STEP: Considering event: Type = [Normal], Name = 
[overcommit-6.16b70338204a203f], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b7033856e81ee8], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b703368c393959], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b70338dc138f4f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b7033934538f46], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.480577881s] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b703393a70bc08], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b7033941c6dae9], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b703368cdc754f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b70338258d59f0], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b703383ab12847], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 354.659113ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b70338795f366f], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b70338c28320ff], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b703368d6f9ee2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4527/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b7033855afd406], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b7033866f8509e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 289.954794ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b703387f6ad652], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b70338bb809275], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b7033a1535629b], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:15:38.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4527" for this suite. 
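The FailedScheduling event above follows directly from the logged numbers. A small arithmetic sketch, assuming each overcommit pod requests one tenth of a node's ephemeral-storage allocatable (that reading is an assumption based on the logged "pod capacity"):

# Reproducing the logged numbers: each node advertises the same ephemeral-storage
# allocatable, and each overcommit pod is sized at roughly one tenth of that.
allocatable_per_node = 405424133473          # from the log, node1 and node2
pod_request = 40542413347                    # "Using pod capacity" from the log
assert pod_request == allocatable_per_node // 10

# 10 pods per node fit; an 11th request on either node would exceed allocatable,
# which is why the extra "additional-pod" fails with Insufficient ephemeral-storage.
assert 10 * pod_request <= allocatable_per_node
assert 11 * pod_request > allocatable_per_node
print("2 worker nodes x 10 pods saturate the cluster; a 21st pod cannot schedule")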
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.386 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":10,"skipped":3923,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:15:38.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:15:38.639: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:15:38.648: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:15:38.651: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:15:38.659: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:15:38.659: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:38.659: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:38.659: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:15:38.659: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:38.659: INFO: Container init ready: false, restart count 0 Nov 13 05:15:38.659: INFO: Container install ready: false, restart count 0 Nov 13 05:15:38.659: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.659: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:15:38.659: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.659: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:15:38.660: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:38.660: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:15:38.660: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:38.660: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:38.660: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:38.660: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:38.660: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:15:38.660: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:38.660: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:38.660: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:15:38.660: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container grafana ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:15:38.660: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:15:38.660: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-1 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-1 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-11 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-11 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-12 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-12 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-13 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-13 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-14 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-14 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-15 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-15 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-16 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-16 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-17 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-17 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-18 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-18 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: overcommit-19 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.660: INFO: Container overcommit-19 ready: true, restart count 0 Nov 13 05:15:38.660: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:15:38.671: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:15:38.671: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:38.671: INFO: Container init ready: false, restart count 0 Nov 13 05:15:38.671: INFO: Container install ready: false, restart count 0 Nov 13 05:15:38.671: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:15:38.671: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:38.671: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:38.671: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:15:38.671: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:38.671: INFO: 
kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:15:38.671: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:15:38.671: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:15:38.671: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:38.671: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:38.671: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:38.671: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:38.671: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:38.671: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:38.671: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:15:38.671: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:38.671: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:38.671: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:38.671: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-0 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-0 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-10 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-10 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-2 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-2 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-3 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-3 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-4 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-4 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-5 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-5 ready: true, restart count 
0 Nov 13 05:15:38.671: INFO: overcommit-6 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-6 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-7 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-7 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-8 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-8 ready: true, restart count 0 Nov 13 05:15:38.671: INFO: overcommit-9 from sched-pred-4527 started at 2021-11-13 05:15:22 +0000 UTC (1 container statuses recorded) Nov 13 05:15:38.671: INFO: Container overcommit-9 ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:15:54.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-660" for this suite. 
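The even 2+2 placement of the rs-e2e-pts-filter pods follows from a maxSkew=1 spread constraint keyed on the label applied to the two nodes above. A minimal sketch of such a constraint; the label selector is illustrative, not taken from the test:

# Sketch of a topologySpreadConstraints stanza like the one this test exercises:
# with maxSkew=1 across the two labelled nodes, 4 replicas must land 2 and 2.
constraint = {
    "maxSkew": 1,
    "topologyKey": "kubernetes.io/e2e-pts-filter",    # label applied to node1/node2 above
    "whenUnsatisfiable": "DoNotSchedule",
    "labelSelector": {"matchLabels": {"app": "e2e-pts-filter"}},  # illustrative selector
}

# Skew = (max pods per topology value) - (min pods per topology value).
placements = {"node1": 2, "node2": 2}                  # the even split the test expects
skew = max(placements.values()) - min(placements.values())
assert skew <= constraint["maxSkew"]
print("4 pods, maxSkew=1 ->", placements)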
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.177 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":11,"skipped":5333,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:15:54.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:15:54.823: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:15:54.831: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:15:54.839: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:15:54.855: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:15:54.855: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:54.855: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:15:54.855: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:54.855: INFO: Container init ready: false, restart count 0 Nov 13 05:15:54.855: INFO: Container install ready: false, restart count 0 Nov 13 05:15:54.855: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:15:54.855: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:15:54.855: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:54.855: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:15:54.855: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:54.855: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:54.855: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:54.855: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:54.855: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:15:54.855: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:54.855: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:54.855: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:15:54.855: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container grafana ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:15:54.855: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:15:54.855: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:15:54.855: INFO: rs-e2e-pts-filter-k5lfk from sched-pred-660 started at 2021-11-13 05:15:48 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:15:54.855: INFO: rs-e2e-pts-filter-z8bbz from sched-pred-660 started at 2021-11-13 05:15:48 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.855: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:15:54.855: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:15:54.870: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:15:54.870: INFO: Container discover ready: false, restart count 0 Nov 13 05:15:54.870: INFO: Container init ready: false, restart count 0 Nov 13 05:15:54.870: INFO: Container install ready: false, restart count 0 Nov 13 05:15:54.870: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:15:54.870: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:15:54.870: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:15:54.870: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:15:54.870: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:15:54.870: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:15:54.870: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:15:54.870: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:15:54.870: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:15:54.870: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:15:54.870: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:15:54.870: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:15:54.870: INFO: Container collectd ready: true, restart count 0 Nov 13 05:15:54.870: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:15:54.870: INFO: Container 
rbac-proxy ready: true, restart count 0 Nov 13 05:15:54.870: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:15:54.870: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:15:54.870: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:15:54.870: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:15:54.870: INFO: rs-e2e-pts-filter-hnpl8 from sched-pred-660 started at 2021-11-13 05:15:48 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:15:54.870: INFO: rs-e2e-pts-filter-klrvd from sched-pred-660 started at 2021-11-13 05:15:48 +0000 UTC (1 container statuses recorded) Nov 13 05:15:54.870: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-0d08282e-c035-4862-aaab-9221a27f5b24 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-0d08282e-c035-4862-aaab-9221a27f5b24 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-0d08282e-c035-4862-aaab-9221a27f5b24 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:16:02.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4024" for this suite. 
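The relaunched pod in the NodeAffinity test above only fits the node carrying the randomly generated label. A minimal sketch of such a required node-affinity term, using the key and value from the log; the surrounding structure is illustrative rather than copied from the test:

# Sketch of requiredDuringSchedulingIgnoredDuringExecution node affinity that
# matches the random label the test put on node1 (key and value from the log).
affinity = {
    "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
            "nodeSelectorTerms": [{
                "matchExpressions": [{
                    "key": "kubernetes.io/e2e-0d08282e-c035-4862-aaab-9221a27f5b24",
                    "operator": "In",
                    "values": ["42"],
                }],
            }],
        },
    },
}

node_labels = {"kubernetes.io/e2e-0d08282e-c035-4862-aaab-9221a27f5b24": "42"}
term = affinity["nodeAffinity"]["requiredDuringSchedulingIgnoredDuringExecution"]["nodeSelectorTerms"][0]
expr = term["matchExpressions"][0]
assert node_labels.get(expr["key"]) in expr["values"]   # the labelled node satisfies the requirement
print("pod schedules onto the labelled node")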
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.157 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":12,"skipped":5489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:16:02.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 05:16:02.972: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:17:03.035: INFO: Waiting for terminating namespaces to be deleted... Nov 13 05:17:03.038: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:17:03.059: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:17:03.059: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:17:03.059: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:17:03.059: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
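The ComputeCPUMemFraction entries that follow report, per node, the ratio of requested resources to allocatable capacity: 100m of 77000m CPU ≈ 0.0013 and 104857600 of 178884632576 bytes of memory ≈ 0.000586, matching the logged fractions. After the "balanced" filler pods are created later in this spec, the logged fractions read exactly 1 even though requests exceed allocatable, which suggests the reported value is capped at 1; that cap is treated as an assumption in the sketch below, which only reproduces the arithmetic and is not the framework's implementation.

```go
// Sketch of the requested/allocatable fraction arithmetic seen in the
// ComputeCPUMemFraction log entries. The function name is illustrative; the
// cap at 1 is inferred from the logged values, not confirmed from source.
package main

import "fmt"

func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1 // assumption: fractions are reported as at most 1
	}
	return f
}

func main() {
	// node1 before the test: 100m CPU of 77000m, 104857600 B of 178884632576 B.
	fmt.Println(fraction(100, 77000))              // ≈ 0.0012987
	fmt.Println(fraction(104857600, 178884632576)) // ≈ 0.00058617
	// After the balanced pods: 537700m requested of 77000m allocatable.
	fmt.Println(fraction(537700, 77000)) // 1 under the capping assumption
}
```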
Nov 13 05:17:03.077: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:17:03.077: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:17:03.077: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:03.077: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:17:03.077: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Nov 13 05:17:11.179: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.179: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:17:11.180: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 05:17:11.180: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: 
INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:17:11.180: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:17:11.180: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:17:11.191: INFO: Waiting for running... Nov 13 05:17:11.195: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 05:17:16.264: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:17:16.264: INFO: Node: node2, totalRequestedMemResource: 1251005411328, 
memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 05:17:16.264: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Pod for on the node: fbd7e98f-48fa-43fd-b85a-328e06186b0a-0, Cpu: 38400, Mem: 89350039552 Nov 13 05:17:16.264: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:17:16.264: INFO: Node: node1, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884632576, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:17:42.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7506" for this suite. 
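The PodTopologySpread Scoring spec above first levels both nodes with balanced filler pods (so existing resource usage cannot bias the score), then runs a 4-replica ReplicaSet pinned to node2 and checks that a test pod spreading over the dedicated kubernetes.io/e2e-pts-score topology key is scored onto node1, where fewer matching pods run. The sketch below shows what such a soft (scoring-only) topology spread constraint looks like with the k8s.io/api types; the pod name, image, and foo=bar selector are placeholders, not the e2e test's own values.

```go
// Minimal sketch of a pod with a preferred (ScheduleAnyway) topology spread
// constraint over the test's dedicated node-label topology key.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",                         // placeholder name
			Labels: map[string]string{"foo": "bar"},    // placeholder label set
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-pod",
				Image: "k8s.gcr.io/pause:3.4.1", // placeholder image
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-score", // the key applied to both nodes in this spec
				WhenUnsatisfiable: corev1.ScheduleAnyway,         // scoring only: prefers, never blocks
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"foo": "bar"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.TopologySpreadConstraints[0])
}
```

Because WhenUnsatisfiable is ScheduleAnyway, the constraint influences scoring rather than filtering, which is why the node holding the four ReplicaSet pods (node2) remains feasible but loses to node1.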
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:99.402 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":13,"skipped":5528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSNov 13 05:17:42.359: INFO: Running AfterSuite actions on all nodes Nov 13 05:17:42.359: INFO: Running AfterSuite actions on node 1 Nov 13 05:17:42.359: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0} Ran 13 of 5770 Specs in 536.720 seconds SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped PASS Ginkgo ran 1 suite in 8m58.041341255s Test Suite Passed