I0827 14:17:00.310383 17 test_context.go:457] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0827 14:17:00.310704 17 e2e.go:129] Starting e2e run "ec443749-807a-4d7f-b17b-507ce1432658" on Ginkgo node 1
{"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630073818 - Will randomize all specs
Will run 12 of 5668 specs

Aug 27 14:17:00.338: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 14:17:00.342: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 27 14:17:00.371: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 27 14:17:00.419: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 27 14:17:00.419: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 27 14:17:00.419: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 27 14:17:00.428: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 27 14:17:00.428: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 27 14:17:00.428: INFO: e2e test version: v1.20.10
Aug 27 14:17:00.430: INFO: kube-apiserver version: v1.20.7
Aug 27 14:17:00.430: INFO: >>> kubeConfig: /root/.kube/config
Aug 27 14:17:00.438: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:17:00.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
Aug 27 14:17:00.962: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Aug 27 14:17:00.973: INFO: No PSP annotation exists on dry run pod; assuming PodSecurityPolicy is disabled
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Aug 27 14:17:01.070: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 27 14:18:01.101: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node capi-leguer-md-0-555f949c67-5brzb.
STEP: Apply 10 fake resource to node capi-leguer-md-0-555f949c67-tw45m.
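For orientation before the [It] block: the preemption scenario below combines pod priority with a topology spread constraint over the dedicated kubernetes.io/e2e-pts-preemption key just applied to the two nodes. A rough Go sketch of such a pod spec follows; the name, labels, container and priority class are illustrative assumptions, not the values preemption.go actually uses.

```go
// Illustrative only: roughly the shape of the "medium" pod created below.
// The priority class name, labels and container are assumptions, not the
// values used by preemption.go.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "true"},
		},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // assumed to exist in the cluster
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
			// Spread over the dedicated topology key applied to the two nodes above.
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "true"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod.Spec, "", "  ")
	fmt.Println(string(out))
}
```

With both nodes already 9/10 full of fake resources, the medium pod's spread constraint can only be satisfied by preempting one of the low-priority pods, which is what the steps below verify.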
[It] validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
[AfterEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node capi-leguer-md-0-555f949c67-tw45m
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:18:37.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-9378" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
• [SLOW TEST:96.984 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Preemption
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302
    validates proper pods are preempted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":1,"skipped":24,"failed":0}
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:304
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:18:37.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:137
Aug 27 14:18:37.469: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 27 14:19:37.501: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 14:19:37.504: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 27 14:19:37.518: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 27 14:19:37.518: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:304 Aug 27 14:19:37.526: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 800, cpuAllocatableMil: 88000, cpuFraction: 0.00909090909090909 Aug 27 14:19:37.526: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 1572864000, memAllocatableVal: 67430219776, memFraction: 0.023325802662737566 Aug 27 14:19:37.526: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:19:37.526: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 900, cpuAllocatableMil: 88000, cpuFraction: 0.010227272727272727 Aug 27 14:19:37.526: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 1782579200, memAllocatableVal: 67430219776, memFraction: 0.026435909684435908 Aug 27 14:19:37.537: INFO: Waiting for running... Aug 27 14:19:37.538: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
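As an aside, the cpuFraction and memFraction values above (and in the post-balancing figures below) are plain requested/allocatable ratios, capped at 1, which is why both nodes report exactly 1 once the balancing pods are created. A minimal sketch of the arithmetic, reusing the numbers from the log; this is not the e2e framework's actual helper.

```go
// Not the framework's helper, just the arithmetic behind the logged fractions.
package main

import "fmt"

// fraction returns requested/allocatable, capped at 1 as in the log output.
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		return 1
	}
	return f
}

func main() {
	fmt.Println(fraction(800, 88000))              // 0.00909... (cpuFraction, 5brzb, before balancing)
	fmt.Println(fraction(1572864000, 67430219776)) // 0.02332... (memFraction, 5brzb, before balancing)
	fmt.Println(fraction(344900, 88000))           // 1 (cpuFraction after the balancing pods are added)
}
```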
Aug 27 14:19:42.600: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 344900, cpuAllocatableMil: 88000, cpuFraction: 1 Aug 27 14:19:42.600: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 255565103104, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 27 14:19:42.600: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Pod for on the node: 9e0b7a03-a1a9-4f3a-9bca-2aa564999c01-0, Cpu: 43100, Mem: 31932530688 Aug 27 14:19:42.600: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 388000, cpuAllocatableMil: 88000, cpuFraction: 1 Aug 27 14:19:42.600: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 287497633792, memAllocatableVal: 67430219776, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
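Each of the ten "tolerable" taints applied in the steps that follow is a PreferNoSchedule taint with a randomly generated key and value; the test pod created later tolerates exactly these, so TaintToleration scoring should prefer the first node over the nodes carrying intolerable taints. A sketch of one such taint/toleration pair, with the key and value strings as illustrative stand-ins for the random ones in the log:

```go
// Illustrative taint/toleration pair; the real keys and values below are random.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // stand-in for the random keys below
		Value:  "testing-taint-value-example",
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
	// A matching toleration on the test pod: the first node's taints are all
	// tolerated, the other node's are not, so scoring should prefer the first node.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectPreferNoSchedule,
	}
	fmt.Printf("taint: %+v\ntoleration: %+v\n", taint, toleration)
}
```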
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f9b9708a-acde-4807-8ecf-5c9db7c83752=testing-taint-value-f4cbe7cd-426c-4011-b566-37fdbdc06639:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-79555cea-7d3e-424a-aff6-1189717d1637=testing-taint-value-4086fe51-e572-4836-861c-e6daf4b93698:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-73b1c58b-e999-4813-9942-cbdb64b17130=testing-taint-value-db221f19-b6be-44a2-a8f5-07cc7525783a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2ab55f6b-e631-4986-b2e7-5999c36fa87c=testing-taint-value-6bd88921-701b-4016-8693-29167d1b77e9:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e2d72725-cbec-4e0b-88bd-4f23ee0e04a9=testing-taint-value-6f7bcc62-53b3-412a-af8d-9c1153518650:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-912f77a5-8f40-4890-99e6-06e75c61d2f4=testing-taint-value-b69afa01-ddea-45c1-840d-ecd7937e7603:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-a39957b3-6b97-481e-93ed-a7dd5df6bbbf=testing-taint-value-8d9923c3-0d26-420a-b71b-2d878b91aedf:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7bca5e27-3a7e-4f3d-92fb-3ee025cc7b4c=testing-taint-value-8243220b-4b69-4a42-84d0-863486565fc1:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3b332449-b0d6-4e67-9349-433faa46098d=testing-taint-value-a5156cd9-b08b-4858-a2ed-1afba3ba3475:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-adbf31cf-7c63-4e0e-8092-95c7672c66a3=testing-taint-value-e6e606d7-c8ec-4363-94ef-9b1c7e1abbb0:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f5f5cfc3-1bca-4994-8494-605be458cf59=testing-taint-value-fd64619d-da88-4f42-99b3-6cf38795bb75:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8e0780de-4e7b-43ea-97a0-e243bfb40bc2=testing-taint-value-003d6acc-8efb-47e1-a16d-f46eb6a73c24:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5fc6df4c-a9e9-4539-a7fa-9ab52f28576c=testing-taint-value-01b1d133-9f38-434b-920f-9cf6640d69ac:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-74e0eacc-cf81-474d-93a2-d073cf89fa05=testing-taint-value-b9f85967-3eac-40e6-906a-090568705808:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0e2321a2-a3e5-4f32-b742-2f5de6f7af04=testing-taint-value-a3bfddeb-a188-42c5-900b-6c17ccccab05:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-def1a43b-2ab5-452c-8680-509e5795c649=testing-taint-value-1e67d29c-b42e-41ae-86d4-11f8188147ed:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-961f27e9-0c12-469a-ae18-893734261aba=testing-taint-value-28bdc857-08d6-4703-91bf-55c2c3c39243:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-9076d377-e7a8-4c21-9406-a952eb8c95dd=testing-taint-value-80f986b6-655d-4569-b8b0-c7a62b344471:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-924aa318-8d1f-44f8-92c1-ba7194a7bfad=testing-taint-value-36d036c1-a4a8-43ca-aa8f-cf487b808cd0:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-d6fcfe21-8a43-439f-a098-7cf1eb5c9c70=testing-taint-value-3f79d165-be36-440d-a308-617f1986c30c:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-d6fcfe21-8a43-439f-a098-7cf1eb5c9c70=testing-taint-value-3f79d165-be36-440d-a308-617f1986c30c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-924aa318-8d1f-44f8-92c1-ba7194a7bfad=testing-taint-value-36d036c1-a4a8-43ca-aa8f-cf487b808cd0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-9076d377-e7a8-4c21-9406-a952eb8c95dd=testing-taint-value-80f986b6-655d-4569-b8b0-c7a62b344471:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-961f27e9-0c12-469a-ae18-893734261aba=testing-taint-value-28bdc857-08d6-4703-91bf-55c2c3c39243:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-def1a43b-2ab5-452c-8680-509e5795c649=testing-taint-value-1e67d29c-b42e-41ae-86d4-11f8188147ed:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0e2321a2-a3e5-4f32-b742-2f5de6f7af04=testing-taint-value-a3bfddeb-a188-42c5-900b-6c17ccccab05:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-74e0eacc-cf81-474d-93a2-d073cf89fa05=testing-taint-value-b9f85967-3eac-40e6-906a-090568705808:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5fc6df4c-a9e9-4539-a7fa-9ab52f28576c=testing-taint-value-01b1d133-9f38-434b-920f-9cf6640d69ac:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8e0780de-4e7b-43ea-97a0-e243bfb40bc2=testing-taint-value-003d6acc-8efb-47e1-a16d-f46eb6a73c24:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f5f5cfc3-1bca-4994-8494-605be458cf59=testing-taint-value-fd64619d-da88-4f42-99b3-6cf38795bb75:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-adbf31cf-7c63-4e0e-8092-95c7672c66a3=testing-taint-value-e6e606d7-c8ec-4363-94ef-9b1c7e1abbb0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3b332449-b0d6-4e67-9349-433faa46098d=testing-taint-value-a5156cd9-b08b-4858-a2ed-1afba3ba3475:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7bca5e27-3a7e-4f3d-92fb-3ee025cc7b4c=testing-taint-value-8243220b-4b69-4a42-84d0-863486565fc1:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-a39957b3-6b97-481e-93ed-a7dd5df6bbbf=testing-taint-value-8d9923c3-0d26-420a-b71b-2d878b91aedf:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-912f77a5-8f40-4890-99e6-06e75c61d2f4=testing-taint-value-b69afa01-ddea-45c1-840d-ecd7937e7603:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e2d72725-cbec-4e0b-88bd-4f23ee0e04a9=testing-taint-value-6f7bcc62-53b3-412a-af8d-9c1153518650:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2ab55f6b-e631-4986-b2e7-5999c36fa87c=testing-taint-value-6bd88921-701b-4016-8693-29167d1b77e9:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-73b1c58b-e999-4813-9942-cbdb64b17130=testing-taint-value-db221f19-b6be-44a2-a8f5-07cc7525783a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-79555cea-7d3e-424a-aff6-1189717d1637=testing-taint-value-4086fe51-e572-4836-861c-e6daf4b93698:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f9b9708a-acde-4807-8ecf-5c9db7c83752=testing-taint-value-f4cbe7cd-426c-4011-b566-37fdbdc06639:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:19:48.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-5694" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:134
• [SLOW TEST:71.030 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  Pod should be preferably scheduled to nodes pod can tolerate
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:304
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":2,"skipped":345,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:19:48.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Aug 27 14:19:48.511: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 14:19:48.520: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 14:19:48.523: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 14:19:48.530: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:48.530: INFO: Container astaire ready: true, restart count 0 Aug 27 14:19:48.530: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:48.530: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.530: INFO: Container cassandra ready: true, restart count 0 Aug 27 14:19:48.530: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.530: INFO: Container ellis ready: true, restart count 0 Aug 27 14:19:48.530: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.530: INFO: Container homer ready: true, restart count 0 Aug 27 14:19:48.530: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:48.530: INFO: Container homestead ready: true, restart count 0 Aug 27 14:19:48.530: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:48.530: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.530: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:19:48.530: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.530: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:19:48.530: INFO: with-tolerations from sched-priority-5694 started at 2021-08-27 14:19:43 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.530: INFO: Container with-tolerations ready: true, restart count 0 Aug 27 14:19:48.530: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 14:19:48.537: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:48.537: INFO: Container bono ready: true, restart count 0 Aug 27 14:19:48.537: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:48.537: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:48.537: INFO: Container chronos ready: true, restart count 0 Aug 27 14:19:48.537: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:48.537: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.537: INFO: Container etcd ready: true, restart count 0 Aug 27 14:19:48.537: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.537: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 14:19:48.537: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:48.537: INFO: Container ralf ready: true, restart count 0 Aug 27 14:19:48.537: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:48.537: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 14:19:48.537: INFO: 
Container sprout ready: true, restart count 0 Aug 27 14:19:48.537: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:48.537: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.537: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:19:48.537: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:48.537: INFO: Container kube-proxy ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122 Aug 27 14:19:48.556: INFO: Pod astaire-58968c8b7f-2cfpc requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod bono-6957967566-mbkl6 requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod cassandra-5b9d7c8d97-mtg6p requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod chronos-f6f76cf57-29d9g requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod ellis-6d4bcd9976-wjzcr requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod etcd-744b4d9f98-wlr24 requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod homer-74f8c889f9-dp4pj requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod homestead-f47c95f88-r5gtl requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod homestead-prov-77b78dd7f8-nz7qc requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod ralf-8597986d58-p7crz requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod sprout-58578d4fcd-89l45 requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod kindnet-b64vj requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod kindnet-fp7vq requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod kube-proxy-6wb6p requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Pod kube-proxy-kg48d requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-tw45m Aug 27 14:19:48.556: INFO: Pod with-tolerations requesting local ephemeral resource =0 on Node capi-leguer-md-0-555f949c67-5brzb Aug 27 14:19:48.556: INFO: Using pod capacity: 47063248896 Aug 27 14:19:48.556: INFO: Node: capi-leguer-md-0-555f949c67-tw45m has local ephemeral resource allocatable: 470632488960 Aug 27 14:19:48.556: INFO: Node: capi-leguer-md-0-555f949c67-5brzb has local ephemeral resource allocatable: 470632488960 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Aug 27 14:19:48.638: INFO: Waiting for running... 
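Each of the 20 saturating "overcommit" pods requests one tenth of a node's allocatable local ephemeral storage (470632488960 / 10 = 47063248896 bytes, the "pod capacity" logged above), so the additional pod cannot fit on either worker. A sketch of such a request; the container name and image mirror the events below, the rest is an assumption rather than the test's exact object.

```go
// Illustrative spec for one saturating pod; names mirror the events below,
// the rest is an assumption, not the test's exact object.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	perPod := resource.NewQuantity(47063248896, resource.BinarySI) // allocatable / 10, per the log
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "overcommit-0",
			Image: "k8s.gcr.io/pause:3.2",
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{corev1.ResourceEphemeralStorage: *perPod},
				Limits:   corev1.ResourceList{corev1.ResourceEphemeralStorage: *perPod},
			},
		}},
	}
	fmt.Printf("%+v\n", spec)
}
```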
STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f2fa7f63fff14], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-0 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f2fa84f9c7c50], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f2fa85105dc80], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f2fa890ba13c4], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f2fa7f670d99f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-1 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f2fa83bd29f7e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f2fa83d8ce11b], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f2fa85397dc3e], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f2fa7f8a23603], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-10 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f2fa891fe85eb], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f2fa8957546a3], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f2fa8c5efb555], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f2fa7f8cfa0cd], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-11 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f2fa8920e77ca], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f2fa895fe0f0c], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f2fa8c6106475], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f2fa7f9060427], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-12 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f2fa89215b955], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f2fa8952ff562], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f2fa8c594a6bb], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f2fa7f9419f89], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-13 
to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f2fa8912b18a0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f2fa8941856a1], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f2fa8c635b7ce], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f2fa7f98eb81a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-14 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f2fa891da6c1b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f2fa895ea76a9], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f2fa8c5eea13e], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f2fa7f9c1d73f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-15 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f2fa891fbe507], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f2fa895d41b6d], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f2fa8c621dcc8], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f2fa7f9f55dea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-16 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f2fa8920e7053], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f2fa89614efc0], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f2fa8c61e4c2e], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f2fa7fa2a652a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-17 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f2fa891fc6a87], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f2fa89481d454], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f2fa8c602837a], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f2fa7fa6122ea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-18 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f2fa8920ec36c], Reason = [Pulled], Message = 
[Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f2fa896244bae], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f2fa8c5f8be68], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f2fa7fa970e8c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-19 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f2fa891cd7282], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f2fa89557b984], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f2fa8c68cd5d1], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f2fa7f6b9736a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-2 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f2fa891ff51e1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f2fa895110ac5], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f2fa8c6cc4255], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f2fa7f705b650], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-3 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f2fa892029d27], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f2fa894b8ad77], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f2fa8c6b887a7], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f2fa7f7339083], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-4 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f2fa84fa4e74f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f2fa850fc66bd], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f2fa88fb52680], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f2fa7f7783635], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-5 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f2fa891fc5c14], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f2fa894724826], Reason = [Created], Message 
= [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f2fa8c5a192d3], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f2fa7f7cc7049], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-6 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f2fa891fbe96a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f2fa895f97152], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f2fa8c6176b37], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f2fa7f8027b53], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-7 to capi-leguer-md-0-555f949c67-tw45m] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f2fa891d21548], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f2fa895a98561], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f2fa8c5ebe1e3], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f2fa7f83dbd43], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-8 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f2fa84fb61ab1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f2fa85126403f], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f2fa88fa1c4de], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f2fa7f8668387], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9330/overcommit-9 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f2fa891cd6f8b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f2fa89444b0c8], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f2fa8c6da674d], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.169f2faa52a6e8f3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient ephemeral-storage.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:19:59.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9330" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:11.262 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:122
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":3,"skipped":1145,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:19:59.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Aug 27 14:19:59.771: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 14:19:59.779: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 14:19:59.782: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 14:19:59.791: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:59.791: INFO: Container astaire ready: true, restart count 0 Aug 27 14:19:59.791: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:59.791: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.791: INFO: Container cassandra ready: true, restart count 0 Aug 27 14:19:59.791: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.791: INFO: Container ellis ready: true, restart count 0 Aug 27 14:19:59.791: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.791: INFO: Container homer ready: true, restart count 0 Aug 27 14:19:59.791: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:59.791: INFO: Container homestead ready: true, restart count 0 Aug 27 14:19:59.791: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:59.791: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.791: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:19:59.791: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.791: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:19:59.791: INFO: overcommit-0 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.791: INFO: Container overcommit-0 ready: true, restart count 0 Aug 27 14:19:59.791: INFO: overcommit-1 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-1 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-12 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-12 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-13 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-13 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-17 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-17 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-2 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-2 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-3 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-3 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-5 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-5 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-8 from sched-pred-9330 started at 2021-08-27 14:19:48 
+0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-8 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: overcommit-9 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.792: INFO: Container overcommit-9 ready: true, restart count 0 Aug 27 14:19:59.792: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 14:19:59.801: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:59.801: INFO: Container bono ready: true, restart count 0 Aug 27 14:19:59.801: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:59.801: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:59.801: INFO: Container chronos ready: true, restart count 0 Aug 27 14:19:59.801: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:59.801: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container etcd ready: true, restart count 0 Aug 27 14:19:59.801: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 14:19:59.801: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:19:59.801: INFO: Container ralf ready: true, restart count 0 Aug 27 14:19:59.801: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:59.801: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 14:19:59.801: INFO: Container sprout ready: true, restart count 0 Aug 27 14:19:59.801: INFO: Container tailer ready: true, restart count 0 Aug 27 14:19:59.801: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:19:59.801: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-10 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-10 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-11 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-11 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-14 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-14 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-15 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-15 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-16 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-16 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: 
overcommit-18 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-18 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-19 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-19 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-4 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-4 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-6 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-6 ready: true, restart count 0 Aug 27 14:19:59.801: INFO: overcommit-7 from sched-pred-9330 started at 2021-08-27 14:19:48 +0000 UTC (1 container statuses recorded) Aug 27 14:19:59.801: INFO: Container overcommit-7 ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.169f2fabfe0d02f0], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match Pod's node affinity.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:20:06.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8415" for this suite. 
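The "nonempty NodeSelector" used for restricted-pod above is simply a nodeSelector on a label that no node carries; the scheduler's NodeAffinity plugin evaluates both nodeSelector and node affinity, hence the "didn't match Pod's node affinity" wording in the FailedScheduling event. A sketch of such an unschedulable spec, with a hypothetical label key:

```go
// Illustrative: a pod no node can satisfy, similar in spirit to restricted-pod above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:  "restricted",
			Image: "k8s.gcr.io/pause:3.2",
		}},
		// No node carries this (hypothetical) label, so scheduling fails with
		// "node(s) didn't match Pod's node affinity".
		NodeSelector: map[string]string{"e2e.example.com/nonexistent": "true"},
	}
	fmt.Printf("%+v\n", spec)
}
```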
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:7.157 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:489 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":4,"skipped":1498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:20:06.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Aug 27 14:20:06.936: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 14:20:06.944: INFO: Waiting for terminating namespaces to be deleted... Aug 27 14:20:06.948: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 14:20:06.962: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:06.962: INFO: Container astaire ready: true, restart count 0 Aug 27 14:20:06.962: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:06.962: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.962: INFO: Container cassandra ready: true, restart count 0 Aug 27 14:20:06.962: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.962: INFO: Container ellis ready: true, restart count 0 Aug 27 14:20:06.962: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.962: INFO: Container homer ready: true, restart count 0 Aug 27 14:20:06.962: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:06.962: INFO: Container homestead ready: true, restart count 0 Aug 27 14:20:06.962: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:06.962: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.963: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:20:06.963: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.963: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:20:06.963: 
INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 14:20:06.969: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:06.969: INFO: Container bono ready: true, restart count 0 Aug 27 14:20:06.969: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:06.969: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:06.969: INFO: Container chronos ready: true, restart count 0 Aug 27 14:20:06.969: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:06.969: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.969: INFO: Container etcd ready: true, restart count 0 Aug 27 14:20:06.969: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.969: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 14:20:06.969: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:06.969: INFO: Container ralf ready: true, restart count 0 Aug 27 14:20:06.969: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:06.969: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 14:20:06.969: INFO: Container sprout ready: true, restart count 0 Aug 27 14:20:06.969: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:06.969: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.969: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:20:06.969: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:06.969: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:578 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-e88ee259-0e08-4f31-903a-1746e10f50e8=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-9f6fb450-d8b9-491b-8d52-c6de0d503efd testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-9f6fb450-d8b9-491b-8d52-c6de0d503efd off the node capi-leguer-md-0-555f949c67-5brzb STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-9f6fb450-d8b9-491b-8d52-c6de0d503efd STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e88ee259-0e08-4f31-903a-1746e10f50e8=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:20:11.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9069" for this suite. 
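The pod relaunched "now with tolerations" in the spec above needs both a toleration for the random NoSchedule taint and a selector for the random label. A rough sketch of the relevant fields; the key names are placeholders for the UUID-based keys the suite generates.

package schedsketch

import corev1 "k8s.io/api/core/v1"

// withTolerations returns the scheduling-related fields a pod needs to land on
// a node carrying taint <taintKey>=testing-taint-value:NoSchedule and label
// <labelKey>=testing-label-value. Key names are placeholders.
func withTolerations(taintKey, labelKey string) corev1.PodSpec {
	return corev1.PodSpec{
		Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		Tolerations: []corev1.Toleration{{
			Key:      taintKey,
			Operator: corev1.TolerationOpEqual,
			Value:    "testing-taint-value",
			Effect:   corev1.TaintEffectNoSchedule,
		}},
		NodeSelector: map[string]string{labelKey: "testing-label-value"},
	}
}

Note that the toleration only permits scheduling onto the tainted node; it is the node selector on the freshly applied label that actually pins the pod there.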
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":5,"skipped":1554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:20:11.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Aug 27 14:20:11.148: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 14:20:11.157: INFO: Waiting for terminating namespaces to be deleted... 
Aug 27 14:20:11.160: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 14:20:11.168: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:11.168: INFO: Container astaire ready: true, restart count 0 Aug 27 14:20:11.168: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:11.168: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.168: INFO: Container cassandra ready: true, restart count 0 Aug 27 14:20:11.168: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.168: INFO: Container ellis ready: true, restart count 0 Aug 27 14:20:11.168: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.168: INFO: Container homer ready: true, restart count 0 Aug 27 14:20:11.168: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:11.168: INFO: Container homestead ready: true, restart count 0 Aug 27 14:20:11.168: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:11.168: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.168: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:20:11.168: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.168: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:20:11.168: INFO: with-tolerations from sched-pred-9069 started at 2021-08-27 14:20:09 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.168: INFO: Container with-tolerations ready: true, restart count 0 Aug 27 14:20:11.168: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 14:20:11.176: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:11.176: INFO: Container bono ready: true, restart count 0 Aug 27 14:20:11.176: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:11.176: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:11.176: INFO: Container chronos ready: true, restart count 0 Aug 27 14:20:11.176: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:11.176: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.176: INFO: Container etcd ready: true, restart count 0 Aug 27 14:20:11.176: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.176: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 14:20:11.176: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:20:11.176: INFO: Container ralf ready: true, restart count 0 Aug 27 14:20:11.176: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:11.176: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 14:20:11.176: INFO: Container 
sprout ready: true, restart count 0 Aug 27 14:20:11.176: INFO: Container tailer ready: true, restart count 0 Aug 27 14:20:11.176: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.176: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:20:11.176: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:20:11.176: INFO: Container kube-proxy ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:216 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9.169f2fae2cb4fc7f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] STEP: Considering event: Type = [Normal], Name = [filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9.169f2faecddb7709], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6354/filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9 to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9.169f2faefaa14ae1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9.169f2faefc154427], Reason = [Created], Message = [Created container filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9] STEP: Considering event: Type = [Normal], Name = [filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9.169f2faf0737e556], Reason = [Started], Message = [Started container filler-pod-a76cdfd2-3663-4574-917e-c0c4716c92c9] STEP: Considering event: Type = [Normal], Name = [without-label.169f2fadb3570902], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6354/without-label to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [without-label.169f2fadde9fd2a1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-label.169f2fade00612e5], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.169f2fade9d76eec], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.169f2fae2b4939e1], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.169f2fae3948ce8b], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] 
STEP: Considering event: Type = [Warning], Name = [additional-podc859e18d-5274-49d4-b1ac-1b62a31dfe0c.169f2faf93a9b921], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient example.com/beardsecond.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:251 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:20:22.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6354" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 • [SLOW TEST:11.198 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:211 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:271 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":6,"skipped":2218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:20:22.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:137 Aug 27 14:20:22.363: INFO: Waiting up to 1m0s for all nodes to be ready Aug 27 14:21:22.394: INFO: Waiting for terminating namespaces to be deleted... Aug 27 14:21:22.397: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 27 14:21:22.412: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 27 14:21:22.412: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
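For reference on the pod-overhead spec that just passed: the overhead comes from a RuntimeClass whose overhead.podFixed values are added to a pod's effective requests at admission time, which is why the filler pod plus overhead exhausts the fake example.com/beardsecond resource and the second pod fails to schedule. A minimal sketch with made-up handler name and quantities:

package schedsketch

import (
	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// overheadRuntimeClass declares a fixed per-pod overhead. Any pod that sets
// runtimeClassName to this class has the overhead added to its effective
// requests, and the scheduler accounts for it when filtering nodes.
func overheadRuntimeClass() *nodev1.RuntimeClass {
	return &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"}, // name and handler are illustrative
		Handler:    "test-handler",
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("250m"),
				corev1.ResourceMemory: resource.MustParse("120Mi"),
			},
		},
	}
}

The suite expresses the overhead in terms of its fake extended resource rather than cpu/memory, but the accounting mechanism is the same.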
[It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Aug 27 14:21:24.449: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Aug 27 14:21:24.449: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Aug 27 14:21:24.449: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:24.449: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Aug 27 14:21:24.449: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 Aug 27 14:21:24.456: INFO: Waiting for running... Aug 27 14:21:24.456: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Aug 27 14:21:29.519: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Aug 27 14:21:29.519: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 27 14:21:29.519: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 27 14:21:29.519: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 100, cpuAllocatableMil: 88000, cpuFraction: 0.0011363636363636363 Aug 27 14:21:29.519: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 104857600, memAllocatableVal: 67430219776, memFraction: 0.001555053510849171 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:21:47.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1555" for this suite. 
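In the spec above, the anti-affinity pod is expected to land on whichever node does not already run the labelled pod, keyed on the kubernetes.io/hostname topology. A sketch of the decisive fields; the security=S1 label is an assumption based on the pod name in the log, and whether the suite uses the required or the preferred flavour is not visible here, so the required form is shown.

package schedsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinity repels the pod from any node (topologyKey kubernetes.io/hostname)
// that already hosts a pod matching the label selector.
func antiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"security": "S1"}, // assumed label
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}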
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:134 • [SLOW TEST:85.268 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":7,"skipped":2382,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:21:47.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Aug 27 14:21:47.615: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 14:21:47.624: INFO: Waiting for terminating namespaces to be deleted... 
Aug 27 14:21:47.628: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 14:21:47.635: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:21:47.635: INFO: Container astaire ready: true, restart count 0 Aug 27 14:21:47.635: INFO: Container tailer ready: true, restart count 0 Aug 27 14:21:47.635: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.635: INFO: Container cassandra ready: true, restart count 0 Aug 27 14:21:47.635: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.635: INFO: Container ellis ready: true, restart count 0 Aug 27 14:21:47.635: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.635: INFO: Container homer ready: true, restart count 0 Aug 27 14:21:47.635: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:21:47.636: INFO: Container homestead ready: true, restart count 0 Aug 27 14:21:47.636: INFO: Container tailer ready: true, restart count 0 Aug 27 14:21:47.636: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.636: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:21:47.636: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.636: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:21:47.636: INFO: pod-with-label-security-s1 from sched-priority-1555 started at 2021-08-27 14:21:22 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.636: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 Aug 27 14:21:47.636: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 14:21:47.644: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:21:47.644: INFO: Container bono ready: true, restart count 0 Aug 27 14:21:47.644: INFO: Container tailer ready: true, restart count 0 Aug 27 14:21:47.644: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:21:47.644: INFO: Container chronos ready: true, restart count 0 Aug 27 14:21:47.644: INFO: Container tailer ready: true, restart count 0 Aug 27 14:21:47.644: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.644: INFO: Container etcd ready: true, restart count 0 Aug 27 14:21:47.644: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.644: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 14:21:47.644: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:21:47.644: INFO: Container ralf ready: true, restart count 0 Aug 27 14:21:47.644: INFO: Container tailer ready: true, restart count 0 Aug 27 14:21:47.644: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 
14:21:47.644: INFO: Container sprout ready: true, restart count 0 Aug 27 14:21:47.644: INFO: Container tailer ready: true, restart count 0 Aug 27 14:21:47.644: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.644: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:21:47.644: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.644: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:21:47.644: INFO: pod-with-pod-antiaffinity from sched-priority-1555 started at 2021-08-27 14:21:29 +0000 UTC (1 container statuses recorded) Aug 27 14:21:47.645: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:530 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b92c0574-faf6-4833-80a2-f4430172173d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b92c0574-faf6-4833-80a2-f4430172173d off the node capi-leguer-md-0-555f949c67-5brzb STEP: verifying the node doesn't have the label kubernetes.io/e2e-b92c0574-faf6-4833-80a2-f4430172173d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:21:51.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4254" for this suite. 
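Here the relaunched pod's required node affinity targets the randomly named label with value 42, so it can only schedule onto the node that just received that label. A sketch with a placeholder key standing in for the kubernetes.io/e2e-<uuid> key the suite generates:

package schedsketch

import corev1 "k8s.io/api/core/v1"

// requiredNodeAffinity pins a pod to nodes carrying labelKey=42.
func requiredNodeAffinity(labelKey string) *corev1.NodeAffinity {
	return &corev1.NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
			NodeSelectorTerms: []corev1.NodeSelectorTerm{{
				MatchExpressions: []corev1.NodeSelectorRequirement{{
					Key:      labelKey,
					Operator: corev1.NodeSelectorOpIn,
					Values:   []string{"42"},
				}},
			}},
		},
	}
}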
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":8,"skipped":2399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:360 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:21:51.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:137 Aug 27 14:21:51.785: INFO: Waiting up to 1m0s for all nodes to be ready Aug 27 14:22:51.817: INFO: Waiting for terminating namespaces to be deleted... Aug 27 14:22:51.820: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 27 14:22:51.834: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 27 14:22:51.834: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. 
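The scoring spec that follows relies on a topology spread constraint keyed on the kubernetes.io/e2e-pts-score label just applied to the two nodes: with a ReplicaSet of 4 matching pods packed onto one node, the spreading score should steer the test pod to the other node. Roughly, under the assumption of an illustrative label selector:

package schedsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadConstraint prefers the topology domain (here: one of the two labelled
// nodes) with the fewest pods matching the selector. ScheduleAnyway makes it a
// scoring signal rather than a hard filter, which is why this lives under
// SchedulerPriorities rather than SchedulerPredicates.
func spreadConstraint() corev1.TopologySpreadConstraint {
	return corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score",
		WhenUnsatisfiable: corev1.ScheduleAnyway,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "test"}, // assumed selector
		},
	}
}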
[It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:360 Aug 27 14:22:55.931: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 800, cpuAllocatableMil: 88000, cpuFraction: 0.00909090909090909 Aug 27 14:22:55.931: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 1572864000, memAllocatableVal: 67430219776, memFraction: 0.023325802662737566 Aug 27 14:22:55.931: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.931: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.932: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.932: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:22:55.932: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 900, cpuAllocatableMil: 88000, cpuFraction: 0.010227272727272727 Aug 27 14:22:55.932: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 1782579200, memAllocatableVal: 67430219776, memFraction: 0.026435909684435908 Aug 27 14:22:55.937: INFO: Waiting for running... Aug 27 14:22:55.937: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Aug 27 14:23:00.999: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 345700, cpuAllocatableMil: 88000, cpuFraction: 1 Aug 27 14:23:00.999: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 257242824704, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 27 14:23:00.999: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Pod for on the node: 815520a7-74d7-4d79-9e0c-81393a0f8337-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:23:00.999: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 388900, cpuAllocatableMil: 88000, cpuFraction: 1 Aug 27 14:23:00.999: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 289385070592, memAllocatableVal: 67430219776, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "capi-leguer-md-0-555f949c67-5brzb" STEP: Verifying if the test-pod lands on node "capi-leguer-md-0-555f949c67-tw45m" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:354 STEP: removing the label kubernetes.io/e2e-pts-score off the node capi-leguer-md-0-555f949c67-5brzb STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node capi-leguer-md-0-555f949c67-tw45m STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities 
[Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:23:17.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9835" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:134 • [SLOW TEST:85.429 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:342 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:360 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":9,"skipped":3556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:240 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:23:17.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:137 Aug 27 14:23:17.228: INFO: Waiting up to 1m0s for all nodes to be ready Aug 27 14:24:17.258: INFO: Waiting for terminating namespaces to be deleted... Aug 27 14:24:17.261: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 27 14:24:17.274: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 27 14:24:17.274: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:240 Aug 27 14:24:17.283: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 800, cpuAllocatableMil: 88000, cpuFraction: 0.00909090909090909 Aug 27 14:24:17.283: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 1572864000, memAllocatableVal: 67430219776, memFraction: 0.023325802662737566 Aug 27 14:24:17.283: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Pod for on the node: kube-scheduler-capi-leguer-control-plane-mt48s, Cpu: 100, Mem: 209715200 Aug 27 14:24:17.283: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 900, cpuAllocatableMil: 88000, cpuFraction: 0.010227272727272727 Aug 27 14:24:17.283: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 1782579200, memAllocatableVal: 67430219776, memFraction: 0.026435909684435908 Aug 27 14:24:17.294: INFO: Waiting for running... Aug 27 14:24:17.295: INFO: Waiting for running... 
STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 27 14:24:22.354: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-5brzb Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedCPUResource: 345700, cpuAllocatableMil: 88000, cpuFraction: 1 Aug 27 14:24:22.354: INFO: Node: capi-leguer-md-0-555f949c67-5brzb, totalRequestedMemResource: 257242824704, memAllocatableVal: 67430219776, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 27 14:24:22.354: INFO: ComputeCPUMemFraction for node: capi-leguer-md-0-555f949c67-tw45m Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Pod for on the node: af2bbd0f-f2ce-480b-a74a-a7a6ec6fa2f5-0, Cpu: 43200, Mem: 32142245888 Aug 27 14:24:22.354: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedCPUResource: 388900, cpuAllocatableMil: 88000, cpuFraction: 1 Aug 27 14:24:22.354: INFO: Node: capi-leguer-md-0-555f949c67-tw45m, totalRequestedMemResource: 289385070592, memAllocatableVal: 67430219776, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. 
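The avoidPod annotation applied to the first node in this spec is the alpha scheduler.alpha.kubernetes.io/preferAvoidPods annotation, whose value is a JSON-encoded v1.AvoidPods naming the controller whose pods should stay off that node. A hedged sketch of how such a value can be built; the reason and message strings are made up:

package schedsketch

import (
	"encoding/json"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preferAvoidPodsValue builds the node annotation that asks the scheduler to
// score down this node for pods owned by the given ReplicationController.
func preferAvoidPodsValue(rcName string) (key, value string, err error) {
	controller := true
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       rcName,
					Controller: &controller,
				},
			},
			Reason:  "DefinitelyNotPreferred",           // illustrative
			Message: "pods of this RC should avoid this node", // illustrative
		}},
	}
	raw, err := json.Marshal(&avoid)
	return corev1.PreferAvoidPodsAnnotationKey, string(raw), err
}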
STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-9716 to 1 STEP: Verify the pods should not scheduled to the node: capi-leguer-md-0-555f949c67-5brzb STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-9716, will wait for the garbage collector to delete the pods Aug 27 14:24:28.549: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 6.983348ms Aug 27 14:24:29.050: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 500.314569ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 27 14:24:45.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-9716" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:134 • [SLOW TEST:88.484 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:240 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":10,"skipped":5022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 27 14:24:45.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92 Aug 27 14:24:45.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 27 14:24:45.742: INFO: Waiting for terminating namespaces to be deleted... 
Aug 27 14:24:45.746: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test Aug 27 14:24:45.753: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:24:45.753: INFO: Container astaire ready: true, restart count 0 Aug 27 14:24:45.753: INFO: Container tailer ready: true, restart count 0 Aug 27 14:24:45.753: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.753: INFO: Container cassandra ready: true, restart count 0 Aug 27 14:24:45.753: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.753: INFO: Container ellis ready: true, restart count 0 Aug 27 14:24:45.753: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.753: INFO: Container homer ready: true, restart count 0 Aug 27 14:24:45.753: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:24:45.753: INFO: Container homestead ready: true, restart count 0 Aug 27 14:24:45.753: INFO: Container tailer ready: true, restart count 0 Aug 27 14:24:45.753: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.753: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:24:45.753: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.753: INFO: Container kube-proxy ready: true, restart count 0 Aug 27 14:24:45.753: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test Aug 27 14:24:45.759: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:24:45.759: INFO: Container bono ready: true, restart count 0 Aug 27 14:24:45.759: INFO: Container tailer ready: true, restart count 0 Aug 27 14:24:45.759: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:24:45.759: INFO: Container chronos ready: true, restart count 0 Aug 27 14:24:45.759: INFO: Container tailer ready: true, restart count 0 Aug 27 14:24:45.759: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.759: INFO: Container etcd ready: true, restart count 0 Aug 27 14:24:45.759: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.759: INFO: Container homestead-prov ready: true, restart count 0 Aug 27 14:24:45.759: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded) Aug 27 14:24:45.759: INFO: Container ralf ready: true, restart count 0 Aug 27 14:24:45.759: INFO: Container tailer ready: true, restart count 0 Aug 27 14:24:45.759: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded) Aug 27 14:24:45.759: INFO: Container sprout ready: true, restart count 0 Aug 27 14:24:45.759: INFO: Container tailer ready: true, restart count 0 Aug 27 14:24:45.759: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container 
statuses recorded) Aug 27 14:24:45.759: INFO: Container kindnet-cni ready: true, restart count 0 Aug 27 14:24:45.759: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded) Aug 27 14:24:45.759: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-082acded-a3c4-4994-a8a7-38a170870f53=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-fb9e3cab-d606-4dca-9628-31a8a023771e testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed28eb3e1a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1848/without-toleration to capi-leguer-md-0-555f949c67-5brzb] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed547f2f66], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed5596282c], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed609f6e57], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2feda14a40b2], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.169f2feda7a729fd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-082acded-a3c4-4994-a8a7-38a170870f53: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fedb5b741fc], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.169f2feda7a729fd], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) didn't match Pod's node affinity, 1 node(s) had taint {kubernetes.io/e2e-taint-key-082acded-a3c4-4994-a8a7-38a170870f53: testing-taint-value}, that the pod didn't tolerate, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed28eb3e1a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1848/without-toleration to capi-leguer-md-0-555f949c67-5brzb]
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed547f2f66], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed5596282c], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fed609f6e57], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2feda14a40b2], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f2fedb5b741fc], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-082acded-a3c4-4994-a8a7-38a170870f53=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.169f2fee261a63a3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1848/still-no-tolerations to capi-leguer-md-0-555f949c67-5brzb]
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.169f2fee51728d7f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.169f2fee52c304e5], Reason = [Created], Message = [Created container still-no-tolerations]
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.169f2fee5b33914e], Reason = [Started], Message = [Started container still-no-tolerations]
STEP: removing the label kubernetes.io/e2e-label-key-fb9e3cab-d606-4dca-9628-31a8a023771e off the node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-fb9e3cab-d606-4dca-9628-31a8a023771e
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-082acded-a3c4-4994-a8a7-38a170870f53=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:24:50.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1848" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:5.278 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if not matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:621
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":11,"skipped":5549,"failed":0}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 27 14:24:50.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:92
Aug 27 14:24:50.998: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 27 14:24:51.007: INFO: Waiting for terminating namespaces to be deleted...
Aug 27 14:24:51.010: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-5brzb before test
Aug 27 14:24:51.017: INFO: astaire-58968c8b7f-2cfpc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container astaire ready: true, restart count 0
Aug 27 14:24:51.017: INFO: Container tailer ready: true, restart count 0
Aug 27 14:24:51.017: INFO: cassandra-5b9d7c8d97-mtg6p from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container cassandra ready: true, restart count 0
Aug 27 14:24:51.017: INFO: ellis-6d4bcd9976-wjzcr from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container ellis ready: true, restart count 0
Aug 27 14:24:51.017: INFO: homer-74f8c889f9-dp4pj from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container homer ready: true, restart count 0
Aug 27 14:24:51.017: INFO: homestead-f47c95f88-r5gtl from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container homestead ready: true, restart count 0
Aug 27 14:24:51.017: INFO: Container tailer ready: true, restart count 0
Aug 27 14:24:51.017: INFO: kindnet-b64vj from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 14:24:51.017: INFO: kube-proxy-6wb6p from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container kube-proxy ready: true, restart count 0
Aug 27 14:24:51.017: INFO: still-no-tolerations from sched-pred-1848 started at 2021-08-27 14:24:50 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.017: INFO: Container still-no-tolerations ready: false, restart count 0
Aug 27 14:24:51.017: INFO: Logging pods the apiserver thinks is on node capi-leguer-md-0-555f949c67-tw45m before test
Aug 27 14:24:51.024: INFO: bono-6957967566-mbkl6 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container bono ready: true, restart count 0
Aug 27 14:24:51.024: INFO: Container tailer ready: true, restart count 0
Aug 27 14:24:51.024: INFO: chronos-f6f76cf57-29d9g from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container chronos ready: true, restart count 0
Aug 27 14:24:51.024: INFO: Container tailer ready: true, restart count 0
Aug 27 14:24:51.024: INFO: etcd-744b4d9f98-wlr24 from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container etcd ready: true, restart count 0
Aug 27 14:24:51.024: INFO: homestead-prov-77b78dd7f8-nz7qc from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container homestead-prov ready: true, restart count 0
Aug 27 14:24:51.024: INFO: ralf-8597986d58-p7crz from ims-ftg7f started at 2021-08-27 08:50:38 +0000 UTC (2 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container ralf ready: true, restart count 0
Aug 27 14:24:51.024: INFO: Container tailer ready: true, restart count 0
Aug 27 14:24:51.024: INFO: sprout-58578d4fcd-89l45 from ims-ftg7f started at 2021-08-27 08:50:39 +0000 UTC (2 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container sprout ready: true, restart count 0
Aug 27 14:24:51.024: INFO: Container tailer ready: true, restart count 0
Aug 27 14:24:51.024: INFO: kindnet-fp7vq from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container kindnet-cni ready: true, restart count 0
Aug 27 14:24:51.024: INFO: kube-proxy-kg48d from kube-system started at 2021-08-27 08:49:45 +0000 UTC (1 container statuses recorded)
Aug 27 14:24:51.024: INFO: Container kube-proxy ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:788
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:796
STEP: removing the label kubernetes.io/e2e-pts-filter off the node capi-leguer-md-0-555f949c67-5brzb
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node capi-leguer-md-0-555f949c67-tw45m
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 27 14:24:59.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-812" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:83
• [SLOW TEST:8.199 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:784
    validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:802
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":12,"skipped":5552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 27 14:24:59.165: INFO: Running AfterSuite actions on all nodes
Aug 27 14:24:59.166: INFO: Running AfterSuite actions on node 1
Aug 27 14:24:59.166: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":12,"completed":12,"skipped":5656,"failed":0}

Ran 12 of 5668 Specs in 478.832 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5656 Skipped
PASS

Ginkgo ran 1 suite in 8m0.469063089s
Test Suite Passed
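
The taints/tolerations predicate test earlier in this run hinges on one rule: a NoSchedule taint on a node makes the scheduler reject every pod that does not carry a matching toleration, which is why the still-no-tolerations pod only schedules once the random taint is removed. Below is a minimal Go sketch of the two objects involved, using the k8s.io/api/core/v1 types; the taint key and pod name are illustrative placeholders, not the randomly generated values logged above.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A NoSchedule taint comparable to the one the test puts on the chosen node.
	// The real test appends a random UUID to the key; this key is a placeholder.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example",
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// A pod that would be allowed onto the tainted node, because its toleration
	// matches the taint's key, value, and effect. The "still-no-tolerations"
	// pod in the log omits this stanza and therefore cannot schedule there.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-toleration"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.2"},
			},
			Tolerations: []corev1.Toleration{{
				Key:      taint.Key,
				Operator: corev1.TolerationOpEqual,
				Value:    taint.Value,
				Effect:   taint.Effect,
			}},
		},
	}

	fmt.Printf("taint: %+v\npod tolerations: %+v\n", taint, pod.Spec.Tolerations)
}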
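
The PodTopologySpread Filtering case that closes the run relies on a hard spreading constraint: with a maxSkew of 1 on the dedicated kubernetes.io/e2e-pts-filter topology key applied to the two prepared nodes, the number of matching pods on those nodes may never differ by more than one, so the 4 test pods necessarily land 2 and 2. The sketch below shows what such a constraint looks like in Go, assuming whenUnsatisfiable=DoNotSchedule (the filtering form of the feature) and a hypothetical foo=bar pod label standing in for whatever labels the test actually generates.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard spreading rule: pods matching foo=bar must stay within a skew of 1
	// across the values of the kubernetes.io/e2e-pts-filter node label.
	spread := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"foo": "bar"}, // placeholder label
		},
	}

	// Each of the 4 pods would carry the label and the constraint, so the
	// scheduler filters out any node whose placement would push the skew to 2.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "spread-pod-1",
			Labels: map[string]string{"foo": "bar"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "pause", Image: "k8s.gcr.io/pause:3.2"},
			},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{spread},
		},
	}

	fmt.Printf("constraint: %+v\npod: %s\n", spread, pod.Name)
}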