I1113 04:56:44.495715 22 e2e.go:129] Starting e2e run "bdfc9b14-d4c4-4c91-84d6-5f57d3f4a279" on Ginkgo node 1 {"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1636779403 - Will randomize all specs Will run 13 of 5770 specs Nov 13 04:56:44.510: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:56:44.514: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Nov 13 04:56:44.541: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 04:56:44.610: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 04:56:44.610: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 04:56:44.610: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 04:56:44.610: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 04:56:44.610: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Nov 13 04:56:44.623: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Nov 13 04:56:44.623: INFO: e2e test version: v1.21.5 Nov 13 04:56:44.624: INFO: kube-apiserver version: v1.21.1 Nov 13 04:56:44.624: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:56:44.630: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:56:44.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred W1113 04:56:44.660638 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 04:56:44.660: INFO: Found PodSecurityPolicies; testing pod creation to see 
if PodSecurityPolicy is enabled Nov 13 04:56:44.664: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 04:56:44.665: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 04:56:44.678: INFO: Waiting for terminating namespaces to be deleted... Nov 13 04:56:44.682: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 04:56:44.702: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 04:56:44.702: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:56:44.702: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:56:44.702: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 04:56:44.702: INFO: Container discover ready: false, restart count 0 Nov 13 04:56:44.702: INFO: Container init ready: false, restart count 0 Nov 13 04:56:44.702: INFO: Container install ready: false, restart count 0 Nov 13 04:56:44.703: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 04:56:44.703: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 04:56:44.703: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:56:44.703: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 04:56:44.703: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:56:44.703: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:56:44.703: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.703: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:56:44.703: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 04:56:44.703: INFO: Container collectd ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:56:44.703: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 04:56:44.703: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container node-exporter ready: true, restart 
count 0 Nov 13 04:56:44.703: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 04:56:44.703: INFO: Container config-reloader ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container grafana ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container prometheus ready: true, restart count 1 Nov 13 04:56:44.703: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 04:56:44.703: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 04:56:44.703: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 04:56:44.718: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 04:56:44.718: INFO: Container discover ready: false, restart count 0 Nov 13 04:56:44.718: INFO: Container init ready: false, restart count 0 Nov 13 04:56:44.718: INFO: Container install ready: false, restart count 0 Nov 13 04:56:44.718: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 04:56:44.718: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:56:44.718: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:56:44.718: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 04:56:44.718: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:56:44.718: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 04:56:44.718: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 04:56:44.718: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 04:56:44.718: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:56:44.718: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:56:44.718: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:56:44.718: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 04:56:44.718: INFO: Container collectd ready: true, 
restart count 0 Nov 13 04:56:44.718: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:56:44.718: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:56:44.718: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 04:56:44.718: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:56:44.718: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:56:44.718: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 04:56:44.718: INFO: Container tas-extender ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-47792a35-c3d1-42de-b2da-d1fdaf83881f 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-47792a35-c3d1-42de-b2da-d1fdaf83881f off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-47792a35-c3d1-42de-b2da-d1fdaf83881f [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:57:00.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2548" for this suite. 
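For reference, a minimal sketch of the three pod specs the scenario above implies, built only from values visible in the log (hostPort 54321; hostIPs 127.0.0.1 and 10.10.190.208; TCP vs UDP; the random node label with value "90"). The scheduler only rejects a pod when the full (hostIP, hostPort, protocol) tuple collides, so all three can land on the same node. The image and pod-building helper are placeholders, not taken from the test code.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod that binds hostPort 54321 with the given hostIP and protocol.
func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// Pin to the node the test labelled above (label key and value "90" come from the log).
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-47792a35-c3d1-42de-b2da-d1fdaf83881f": "90",
			},
			Containers: []corev1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // placeholder image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

// pod1, pod2, pod3 differ in hostIP and/or protocol, so none of them conflict.
var (
	pod1 = hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP)
	pod2 = hostPortPod("pod2", "10.10.190.208", corev1.ProtocolTCP)
	pod3 = hostPortPod("pod3", "10.10.190.208", corev1.ProtocolUDP)
)

func main() {}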
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.213 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":1,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:57:00.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 04:57:00.872: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 04:58:00.951: INFO: Waiting for terminating namespaces to be deleted... Nov 13 04:58:00.954: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 04:58:00.972: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 04:58:00.972: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 04:58:00.972: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 04:58:00.972: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
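The readiness gate logged above (Succeeded pods such as cmk-init-discover-* are skipped, the rest must be Running and Ready) can be approximated with client-go; a rough sketch under those assumptions, not the e2e framework's own helper:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	ready := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodSucceeded {
			continue // already finished, so not counted against readiness
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, len(pods.Items))
}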
Nov 13 04:58:00.987: INFO: ComputeCPUMemFraction for node: node1 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:58:00.987: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 04:58:00.987: INFO: ComputeCPUMemFraction for node: node2 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:58:00.987: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:58:00.987: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Nov 13 04:58:05.029: INFO: ComputeCPUMemFraction for node: node1 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:58:05.029: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 04:58:05.029: INFO: ComputeCPUMemFraction for node: node2 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for 
on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:05.029: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:58:05.029: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 04:58:05.040: INFO: Waiting for running... Nov 13 04:58:05.043: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 04:58:10.110: INFO: ComputeCPUMemFraction for node: node1 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:58:10.110: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
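The cpuFraction and memFraction values logged for each node are just requested/allocatable ratios. The sketch below reproduces the node1 numbers from the log (100 millicores of 77000, 104857600 bytes of 178884632576):

package main

import "fmt"

func main() {
	// Values taken from the node1 log lines above.
	totalRequestedCPU := 100.0         // millicores
	cpuAllocatableMil := 77000.0       // millicores
	totalRequestedMem := 104857600.0   // bytes (100 MiB)
	memAllocatable := 178884632576.0   // bytes

	fmt.Println(totalRequestedCPU / cpuAllocatableMil) // ≈ 0.0012987012987012987
	fmt.Println(totalRequestedMem / memAllocatable)    // ≈ 0.0005861744437742619
}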
Nov 13 04:58:10.110: INFO: ComputeCPUMemFraction for node: node2 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Nov 13 04:58:10.110: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:58:10.110: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:58:22.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8937" for this suite. 
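A rough sketch of the shape of the two pods in this spec, based only on what the log shows: pod-with-label-security-s1 carries a label (key/value security=S1 inferred from its name), and pod-with-pod-antiaffinity must avoid any node already running that label, so the scheduler places it on the other node. Images and exact selector values beyond security=S1 are assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// First pod: carries the label the anti-affinity term will match.
var labeled = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{
		Name:   "pod-with-label-security-s1",
		Labels: map[string]string{"security": "S1"}, // inferred from the pod name
	},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
	},
}

// Second pod: required anti-affinity against security=S1 on the hostname topology,
// so it must be scheduled to a node that does NOT already run the labeled pod.
var antiAffinity = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
	Spec: corev1.PodSpec{
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchExpressions: []metav1.LabelSelectorRequirement{{
							Key:      "security",
							Operator: metav1.LabelSelectorOpIn,
							Values:   []string{"S1"},
						}},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		},
		Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
	},
}

func main() {}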
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:81.314 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":2,"skipped":176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:58:22.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 04:58:22.189: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 04:58:22.197: INFO: Waiting for terminating namespaces to be deleted... 
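The per-node pod inventory printed below ("Logging pods the apiserver thinks is on node node1/node2 before test") can be reproduced with a field selector on spec.nodeName; a minimal sketch, assuming the same kubeconfig path the suite logs:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for _, node := range []string{"node1", "node2"} {
		// All namespaces ("" = everything), filtered to pods bound to this node.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + node,
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s from %s started at %v\n", p.Name, p.Namespace, p.Status.StartTime)
		}
	}
}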
Nov 13 04:58:22.200: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 04:58:22.208: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 04:58:22.208: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:58:22.208: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 04:58:22.208: INFO: Container discover ready: false, restart count 0 Nov 13 04:58:22.208: INFO: Container init ready: false, restart count 0 Nov 13 04:58:22.208: INFO: Container install ready: false, restart count 0 Nov 13 04:58:22.208: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 04:58:22.208: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 04:58:22.208: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:58:22.208: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 04:58:22.208: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:58:22.208: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:58:22.208: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:58:22.208: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 04:58:22.208: INFO: Container collectd ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:58:22.208: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 04:58:22.208: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:58:22.208: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 04:58:22.208: INFO: Container config-reloader ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container grafana ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container prometheus ready: true, restart count 1 Nov 13 04:58:22.208: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 04:58:22.208: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 04:58:22.208: INFO: pod-with-pod-antiaffinity from sched-priority-8937 started at 2021-11-13 04:58:10 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.208: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Nov 13 04:58:22.208: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 04:58:22.219: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 04:58:22.219: INFO: Container discover ready: false, restart count 0 Nov 13 04:58:22.219: INFO: Container init ready: false, restart count 0 Nov 13 04:58:22.219: INFO: Container install ready: false, restart count 0 Nov 13 04:58:22.219: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 04:58:22.219: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:58:22.219: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:58:22.219: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 04:58:22.219: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:58:22.219: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 04:58:22.219: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 04:58:22.219: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 04:58:22.219: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:58:22.219: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:58:22.219: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:58:22.219: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 04:58:22.219: INFO: Container collectd ready: true, restart count 0 Nov 13 04:58:22.219: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:58:22.219: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:58:22.219: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 04:58:22.219: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 Nov 13 04:58:22.219: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:58:22.219: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container tas-extender ready: true, restart count 0 Nov 13 04:58:22.219: INFO: pod-with-label-security-s1 from sched-priority-8937 started at 2021-11-13 04:58:01 +0000 UTC (1 container statuses recorded) Nov 13 04:58:22.219: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-0ee2bf08-3691-49db-b6e1-41f055573481=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-35778379-4b0c-48dd-a9a0-3eb106d58b75 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-35778379-4b0c-48dd-a9a0-3eb106d58b75 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-35778379-4b0c-48dd-a9a0-3eb106d58b75 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-0ee2bf08-3691-49db-b6e1-41f055573481=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:58:32.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7684" for this suite. 
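A sketch of the taint the test places on node2 and the matching toleration the relaunched pod ("with-tolerations") needs, built from the key/values shown in the log; the same taint could be expressed as kubectl taint nodes node2 <key>=testing-taint-value:NoSchedule. The assumption that the pod also selects the labelled node via nodeSelector, and the image, are illustrative only.

package main

import corev1 "k8s.io/api/core/v1"

// Taint applied to node2 (key and value taken from the log above).
var taint = corev1.Taint{
	Key:    "kubernetes.io/e2e-taint-key-0ee2bf08-3691-49db-b6e1-41f055573481",
	Value:  "testing-taint-value",
	Effect: corev1.TaintEffectNoSchedule,
}

// The relaunched pod needs a matching toleration; the random label pins it to the tainted node.
var podSpec = corev1.PodSpec{
	NodeSelector: map[string]string{
		"kubernetes.io/e2e-label-key-35778379-4b0c-48dd-a9a0-3eb106d58b75": "testing-label-value",
	},
	Tolerations: []corev1.Toleration{{
		Key:      "kubernetes.io/e2e-taint-key-0ee2bf08-3691-49db-b6e1-41f055573481",
		Operator: corev1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   corev1.TaintEffectNoSchedule,
	}},
	Containers: []corev1.Container{{Name: "with-tolerations", Image: "k8s.gcr.io/pause:3.4.1"}},
}

func main() {}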
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.172 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":3,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:58:32.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 04:58:32.367: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 04:58:32.375: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 04:58:32.378: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 04:58:32.397: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 04:58:32.397: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:58:32.397: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 04:58:32.397: INFO: Container discover ready: false, restart count 0 Nov 13 04:58:32.397: INFO: Container init ready: false, restart count 0 Nov 13 04:58:32.397: INFO: Container install ready: false, restart count 0 Nov 13 04:58:32.397: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 04:58:32.397: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 04:58:32.397: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:58:32.397: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 04:58:32.397: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:58:32.397: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:58:32.397: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.397: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:58:32.397: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 04:58:32.397: INFO: Container collectd ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:58:32.397: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 04:58:32.397: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:58:32.397: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 04:58:32.397: INFO: Container config-reloader ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container grafana ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container prometheus ready: true, restart count 1 Nov 13 04:58:32.397: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 04:58:32.397: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 04:58:32.397: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 04:58:32.417: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 04:58:32.417: INFO: Container discover ready: false, restart count 0 Nov 13 04:58:32.417: INFO: Container init ready: false, restart count 0 Nov 13 04:58:32.417: INFO: Container install ready: false, restart count 0 Nov 13 04:58:32.417: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 04:58:32.417: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:58:32.417: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:58:32.417: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 04:58:32.417: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:58:32.417: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 04:58:32.417: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 04:58:32.417: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 04:58:32.417: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:58:32.417: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:58:32.417: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:58:32.417: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 04:58:32.417: INFO: Container collectd ready: true, restart count 0 Nov 13 04:58:32.417: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:58:32.417: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:58:32.417: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 04:58:32.417: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:58:32.418: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:58:32.418: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 
21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.418: INFO: Container tas-extender ready: true, restart count 0 Nov 13 04:58:32.418: INFO: with-tolerations from sched-pred-7684 started at 2021-11-13 04:58:26 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.418: INFO: Container with-tolerations ready: true, restart count 0 Nov 13 04:58:32.418: INFO: pod-with-label-security-s1 from sched-priority-8937 started at 2021-11-13 04:58:01 +0000 UTC (1 container statuses recorded) Nov 13 04:58:32.418: INFO: Container pod-with-label-security-s1 ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b7024b6700d208], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:58:33.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8678" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":4,"skipped":910,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:58:33.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 04:58:33.490: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 04:59:33.541: INFO: Waiting for terminating namespaces to be deleted... 
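The FailedScheduling event above is the expected outcome for a pod whose nodeSelector matches no node: the two workers fail the selector and the three masters keep their node-role.kubernetes.io/master taint, giving "0/5 nodes are available". A minimal sketch of such a spec; the selector key/value and image are placeholders, since the log only says the NodeSelector is nonempty.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A pod that can never be placed: no node carries this label, and the master taint
// is not tolerated either, so it stays Pending with a FailedScheduling event.
var restrictedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
	Spec: corev1.PodSpec{
		NodeSelector: map[string]string{
			"label": "nonempty", // placeholder; any selector no node satisfies behaves the same
		},
		Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
	},
}

func main() {}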
Nov 13 04:59:33.544: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 04:59:33.566: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 04:59:33.566: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 04:59:33.566: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 04:59:33.566: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 04:59:33.581: INFO: ComputeCPUMemFraction for node: node1 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:59:33.581: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 04:59:33.581: INFO: ComputeCPUMemFraction for node: node2 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.581: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.582: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.582: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.582: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:59:33.582: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Nov 13 04:59:33.599: INFO: ComputeCPUMemFraction for node: node1 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:59:33.599: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 04:59:33.599: INFO: ComputeCPUMemFraction for node: node2 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 04:59:33.599: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 04:59:33.599: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 04:59:33.614: INFO: Waiting for running... Nov 13 04:59:33.615: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 04:59:38.692: INFO: ComputeCPUMemFraction for node: node1 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 04:59:38.692: INFO: Node: node1, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 13 04:59:38.692: INFO: ComputeCPUMemFraction for node: node2 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Pod for on the node: 7b413250-29e8-4527-83bf-e2da7cb248e2-0, Cpu: 38400, Mem: 89350039552 Nov 13 04:59:38.692: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 04:59:38.692: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884628480, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
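------------------------------
The cpuFraction/memFraction values logged above appear to be simple requested-over-allocatable ratios, clamped to 1 once the "balanced" placeholder pods over-commit each node. A minimal Go sketch reproducing the node1 numbers under that assumption (illustrative only, not the e2e framework's actual helper):

package main

import "fmt"

// fraction mirrors the apparent formula behind the logged values:
// requested / allocatable, clamped to 1.0.
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Numbers taken from the node1 log lines above.
	fmt.Println(fraction(100, 77000))                  // ~0.0012987012987012987 (cpuFraction before balancing)
	fmt.Println(fraction(104857600, 178884632576))     // ~0.0005861744437742619 (memFraction before balancing)
	// After the balanced pods are created, requests exceed allocatable,
	// so both fractions clamp to 1 as logged.
	fmt.Println(fraction(537700, 77000))               // 1
	fmt.Println(fraction(1251005411328, 178884632576)) // 1
}
------------------------------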
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-aea303d0-2df8-47d7-bfcb=testing-taint-value-1ddecbc0-73a6-4111-a8de-0d0043bd1c94:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3d4f58cc-cee6-47f2-8c43=testing-taint-value-7765ff89-694f-4b81-8663-e71459141625:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2b489d15-5e24-4b67-afbb=testing-taint-value-7c38003c-aed9-414c-9711-90461651327c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bde126f8-eed8-4616-a611=testing-taint-value-7cfc6b6a-7a02-406d-9143-345df6a5d27c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a331db23-4a17-4d69-97ac=testing-taint-value-f8b3ec89-f815-4c6e-8005-ec7da6c29104:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-70f8113b-2900-46cb-96eb=testing-taint-value-4622214b-1b45-4e63-af49-2287b23ed75c:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-97524043-c814-4e8d-b7c9=testing-taint-value-3108ac44-c47d-4a05-8f68-620aa909b421:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5776c4a1-a9bf-41f7-a7c6=testing-taint-value-200c6728-1c96-44da-b0e2-99d026f12b4a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-733c1a89-6fb9-4cfd-b958=testing-taint-value-a7aa002c-8afe-4ce8-b6df-048c16d2dc54:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a6d2b016-ebbe-4336-abf7=testing-taint-value-0b894f83-d1c5-41be-ba9e-22e7fc2defcc:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3c5492a8-9ef5-42b0-be52=testing-taint-value-d3f1fdac-fb0d-4a57-bc9f-aee9c1d6ca44:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a713bd58-60a9-4c02-bb6a=testing-taint-value-106e3ff8-ee6c-49f6-93aa-13d4de7f4655:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c0fcd4f4-b4a2-46a9-b32c=testing-taint-value-2d3b739f-f446-429a-99c3-c1863f1ede9a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e2ef566d-4957-4f2a-925d=testing-taint-value-625a5720-ab32-4e59-8ea1-fb1befb02372:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b374c51f-3197-4ce2-962a=testing-taint-value-0c7dfa40-306b-4f40-b7e2-3c793e9df6fa:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-385ade54-9807-4421-b7b6=testing-taint-value-5489270a-4ac0-40a5-a3fc-f3c7f133b472:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-931ce6f3-173a-4583-8ba7=testing-taint-value-2e517dca-d7ac-42b6-a3c4-a90c4c056972:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-e44de92c-4f80-42ad-bb72=testing-taint-value-8b8b13de-abb2-4cfa-a558-deb0a9a8031e:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ec8d12d2-f4a3-424b-bb71=testing-taint-value-ce8b4a4f-50dc-4a7b-9eec-b876348610c0:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-8087fba6-7000-4546-b822=testing-taint-value-bf25ca4e-38ea-40bc-864a-99d1accfac9b:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3c5492a8-9ef5-42b0-be52=testing-taint-value-d3f1fdac-fb0d-4a57-bc9f-aee9c1d6ca44:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a713bd58-60a9-4c02-bb6a=testing-taint-value-106e3ff8-ee6c-49f6-93aa-13d4de7f4655:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c0fcd4f4-b4a2-46a9-b32c=testing-taint-value-2d3b739f-f446-429a-99c3-c1863f1ede9a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e2ef566d-4957-4f2a-925d=testing-taint-value-625a5720-ab32-4e59-8ea1-fb1befb02372:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b374c51f-3197-4ce2-962a=testing-taint-value-0c7dfa40-306b-4f40-b7e2-3c793e9df6fa:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-385ade54-9807-4421-b7b6=testing-taint-value-5489270a-4ac0-40a5-a3fc-f3c7f133b472:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-931ce6f3-173a-4583-8ba7=testing-taint-value-2e517dca-d7ac-42b6-a3c4-a90c4c056972:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-e44de92c-4f80-42ad-bb72=testing-taint-value-8b8b13de-abb2-4cfa-a558-deb0a9a8031e:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ec8d12d2-f4a3-424b-bb71=testing-taint-value-ce8b4a4f-50dc-4a7b-9eec-b876348610c0:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-8087fba6-7000-4546-b822=testing-taint-value-bf25ca4e-38ea-40bc-864a-99d1accfac9b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-aea303d0-2df8-47d7-bfcb=testing-taint-value-1ddecbc0-73a6-4111-a8de-0d0043bd1c94:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3d4f58cc-cee6-47f2-8c43=testing-taint-value-7765ff89-694f-4b81-8663-e71459141625:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2b489d15-5e24-4b67-afbb=testing-taint-value-7c38003c-aed9-414c-9711-90461651327c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bde126f8-eed8-4616-a611=testing-taint-value-7cfc6b6a-7a02-406d-9143-345df6a5d27c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a331db23-4a17-4d69-97ac=testing-taint-value-f8b3ec89-f815-4c6e-8005-ec7da6c29104:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-70f8113b-2900-46cb-96eb=testing-taint-value-4622214b-1b45-4e63-af49-2287b23ed75c:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-97524043-c814-4e8d-b7c9=testing-taint-value-3108ac44-c47d-4a05-8f68-620aa909b421:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-5776c4a1-a9bf-41f7-a7c6=testing-taint-value-200c6728-1c96-44da-b0e2-99d026f12b4a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-733c1a89-6fb9-4cfd-b958=testing-taint-value-a7aa002c-8afe-4ce8-b6df-048c16d2dc54:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a6d2b016-ebbe-4336-abf7=testing-taint-value-0b894f83-d1c5-41be-ba9e-22e7fc2defcc:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:59:52.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-990" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:78.570 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":5,"skipped":1542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:59:52.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 04:59:52.071: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:00:52.129: INFO: Waiting for terminating namespaces to be deleted... 
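------------------------------
For reference on the toleration-priority spec above: each taint the framework applies has the form <random key>=<random value>:PreferNoSchedule, and the test pod tolerates only the ten taints on the first node, so the scheduler's taint/toleration priority should prefer that node. A small sketch of such a taint/toleration pair using the k8s.io/api/core/v1 types; the key and value below are placeholders, not the random ones from this run:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Placeholder taint of the shape seen in the log:
	// kubernetes.io/e2e-scheduling-priorities-<uuid>=testing-taint-value-<uuid>:PreferNoSchedule
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example",
		Value:  "testing-taint-value-example",
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// A toleration that matches the taint exactly; a pod carrying it is
	// preferred onto the tainted node over nodes whose PreferNoSchedule
	// taints it does not tolerate.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("taint %s=%s:%s tolerated: %v\n",
		taint.Key, taint.Value, taint.Effect, toleration.ToleratesTaint(&taint))
}
------------------------------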
Nov 13 05:00:52.131: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:00:52.152: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:00:52.152: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:00:52.152: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:00:52.152: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 05:00:52.168: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:00:52.168: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:00:52.168: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.168: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:00:52.168: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Nov 13 05:00:52.187: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:00:52.187: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:00:52.187: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on 
the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:00:52.187: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:00:52.187: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 05:00:52.203: INFO: Waiting for running... Nov 13 05:00:52.204: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Nov 13 05:00:57.274: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:00:57.274: INFO: Node: node1, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 13 05:00:57.274: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.274: INFO: Pod for on the node: 7cf97fe0-8b0f-4505-8be6-bd7f2f5d1c88-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:00:57.275: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:00:57.275: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884628480, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3285 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-3285, will wait for the garbage collector to delete the pods Nov 13 05:01:03.451: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.107642ms Nov 13 05:01:03.552: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.978534ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:01:21.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3285" for this suite. 
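------------------------------
The "avoidPod annotation" exercised above is the scheduler.alpha.kubernetes.io/preferAvoidPods node annotation: its JSON value lists controllers whose pods the NodePreferAvoidPods priority steers away from the annotated node. A rough sketch of the annotation value the test presumably writes for its scheduler-priority-avoid-pod ReplicationController, built from the core/v1 types; the controller reference details are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	// Point the avoidPods entry at the RC created by the test above.
	avoid := v1.AvoidPods{
		PreferAvoidPods: []v1.PreferAvoidPodsEntry{{
			PodSignature: v1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "scheduler-priority-avoid-pod",
					Controller: &controller,
				},
			},
			Reason: "DuringTest",
		}},
	}

	val, _ := json.Marshal(avoid)
	// The node annotation the scheduler's NodePreferAvoidPods priority reads.
	fmt.Printf("%s: %s\n", v1.PreferAvoidPodsAnnotationKey, val)
}
------------------------------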
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:89.530 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":6,"skipped":2284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:01:21.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:01:21.604: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:01:21.612: INFO: Waiting for terminating namespaces to be deleted... Nov 13 05:01:21.618: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:01:21.625: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:01:21.625: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:01:21.625: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:01:21.625: INFO: Container discover ready: false, restart count 0 Nov 13 05:01:21.625: INFO: Container init ready: false, restart count 0 Nov 13 05:01:21.625: INFO: Container install ready: false, restart count 0 Nov 13 05:01:21.625: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:01:21.625: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:01:21.625: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:01:21.625: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:01:21.625: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container 
nginx-proxy ready: true, restart count 2 Nov 13 05:01:21.625: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:01:21.625: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.625: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:01:21.625: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:01:21.625: INFO: Container collectd ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:01:21.625: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:01:21.625: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:01:21.625: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:01:21.625: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container grafana ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:01:21.625: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 container statuses recorded) Nov 13 05:01:21.625: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:01:21.625: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:01:21.638: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:01:21.638: INFO: Container discover ready: false, restart count 0 Nov 13 05:01:21.638: INFO: Container init ready: false, restart count 0 Nov 13 05:01:21.638: INFO: Container install ready: false, restart count 0 Nov 13 05:01:21.638: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:01:21.638: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:01:21.638: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:01:21.638: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:01:21.638: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:01:21.638: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:01:21.638: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container 
kubernetes-dashboard ready: true, restart count 1 Nov 13 05:01:21.638: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:01:21.638: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:01:21.638: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:01:21.638: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.638: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:01:21.639: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:01:21.639: INFO: Container collectd ready: true, restart count 0 Nov 13 05:01:21.639: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:01:21.639: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:01:21.639: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:01:21.639: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:01:21.639: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:01:21.639: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:01:21.639: INFO: Container tas-extender ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2f02bf32-28c2-4415-bf0c-33c81adc81ad 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-2f02bf32-28c2-4415-bf0c-33c81adc81ad off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2f02bf32-28c2-4415-bf0c-33c81adc81ad [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:01:31.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8496" for this suite. 
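------------------------------
The required-NodeAffinity spec above labels one node with a random kubernetes.io/e2e-<uuid> key (value 42), then relaunches the pod with an affinity that requires that label, and verifies it schedules there. A minimal sketch of the affinity stanza such a pod would carry, with a placeholder key standing in for the random one from this run:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Placeholder for the randomly generated label applied to the node.
	key, value := "kubernetes.io/e2e-example", "42"

	affinity := v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			// Hard requirement: the pod may only land on nodes carrying the label.
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      key,
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{value},
					}},
				}},
			},
		},
	}

	fmt.Printf("%+v\n", affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution)
}
------------------------------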
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:10.140 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":7,"skipped":2326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:01:31.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:01:31.750: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:01:31.759: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:01:31.761: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:01:31.776: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:01:31.776: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:01:31.776: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:01:31.776: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:01:31.776: INFO: Container discover ready: false, restart count 0 Nov 13 05:01:31.776: INFO: Container init ready: false, restart count 0 Nov 13 05:01:31.776: INFO: Container install ready: false, restart count 0 Nov 13 05:01:31.776: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.776: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:01:31.776: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.776: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:01:31.776: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.776: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:01:31.777: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.777: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:01:31.777: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.777: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:01:31.777: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.777: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:01:31.777: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.777: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:01:31.777: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:01:31.777: INFO: Container collectd ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:01:31.777: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:01:31.777: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:01:31.777: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:01:31.777: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container grafana ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:01:31.777: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:01:31.777: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:01:31.777: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:01:31.795: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:01:31.795: INFO: Container discover ready: false, restart count 0 Nov 13 05:01:31.795: INFO: Container init ready: false, restart count 0 Nov 13 05:01:31.795: INFO: Container install ready: false, restart count 0 Nov 13 05:01:31.795: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:01:31.795: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:01:31.795: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:01:31.795: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:01:31.795: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:01:31.795: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:01:31.795: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:01:31.795: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:01:31.795: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:01:31.795: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:01:31.795: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:01:31.795: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:01:31.795: INFO: Container collectd ready: true, restart count 0 Nov 13 05:01:31.795: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:01:31.795: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:01:31.795: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:01:31.795: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:01:31.795: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:01:31.795: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 
21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:01:31.795: INFO: with-labels from sched-pred-8496 started at 2021-11-13 05:01:25 +0000 UTC (1 container statuses recorded) Nov 13 05:01:31.795: INFO: Container with-labels ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Nov 13 05:01:31.830: INFO: Pod cmk-4tcdw requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.830: INFO: Pod cmk-qhvr7 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.830: INFO: Pod cmk-webhook-6c9d5f8578-2gp25 requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.830: INFO: Pod kube-flannel-mg66r requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.830: INFO: Pod kube-flannel-r7bbp requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.830: INFO: Pod kube-multus-ds-amd64-2wqj5 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.830: INFO: Pod kube-multus-ds-amd64-4wqsv requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.830: INFO: Pod kube-proxy-p6kbl requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.830: INFO: Pod kube-proxy-pzhf2 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod kubernetes-dashboard-785dcbb76d-w2mls requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod kubernetes-metrics-scraper-5558854cb-jmbpk requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod node-feature-discovery-worker-mm7xs requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod node-feature-discovery-worker-zgr4c requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod collectd-74xkn requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod collectd-mp2z6 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod node-exporter-hqkfs requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod node-exporter-hstd9 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod prometheus-operator-585ccfb458-qcz7s requesting local ephemeral resource =0 on Node node1 Nov 13 05:01:31.831: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-q7m54 requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Pod with-labels requesting local ephemeral resource =0 on Node node2 Nov 13 05:01:31.831: INFO: Using pod capacity: 40542413347 Nov 13 05:01:31.831: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 Nov 13 05:01:31.831: INFO: Node: node2 has local ephemeral resource 
allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Nov 13 05:01:32.050: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b702752defc926], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b702764196081f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b70276648446f7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 586.030451ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b7027682ff3749], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b70276b39e0a06], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b702752e589530], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b70275e694c293], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b7027628f594ee], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.113631555s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b702764fc63d57], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b702766a6012d9], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b702753338b353], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b702778191158e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b70277d6bf5761], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.429089097s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b70277dd32913e], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b70277e4a8e977], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b7027533d41bc3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b70277810f48fd], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b70277c4be5547], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.135538299s] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b70277cb41b830], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b70277d1fbe69d], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b70275345a2a10], Reason = [Scheduled], Message = [Successfully assigned 
sched-pred-1227/overcommit-12 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b70277659557f9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b702778b1e7755], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 629.737112ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b7027791af1d8f], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b70277981b2426], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b7027534f8b6d8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b702777cade9f1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b70277b247101b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 899.222178ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b70277b94339c5], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b70277c2bac5d6], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b702753582fc0b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b70277344f2a54], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b702774607ebfe], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 297.312736ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b702774c65a09c], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b70277548b40c7], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b7027536083c26], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b70277387e5686], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b702776db3dba6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 892.693498ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b7027774990d19], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b702777b87b197], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b7027536837801], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b7027738b7f82d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b70277828ef80f], Reason = [Pulled], Message = [Successfully pulled 
image "k8s.gcr.io/pause:3.4.1" in 1.238820132s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b70277894165ee], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b702779052fa8f], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b70275370d0c23], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b702773451afb7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b702775a7bb8e1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 640.282138ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b702776164310f], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b702776bd9f26a], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b7027537b1bf08], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b70276dabb329a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b70276faedb152], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 540.164391ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b702771936f485], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b70277398c847c], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b702753830beeb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b7027738ba4b03], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b702779653dabe], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.570339406s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b702779cd8b46c], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b70277a3e2a275], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b702752ef0d04e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-2 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b7027686610c61], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b70276bd32ee5c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 919.717718ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b70276e1e80c18], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b702773ce32c53], Reason = [Started], Message = [Started container 
overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b702752f78eeab], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-3 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b70275e3118c25], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b702761548ddea], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 842.477413ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b702763d48d792], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b70276740a39e4], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b70275300e325f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b702769afc6f65], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b70276ac0339b5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 285.646292ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b70276d154891b], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b7027737a68a3d], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b702753099d191], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b70276e1b3201b], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b70276f795bb8d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 367.162841ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b702773e6b98d4], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b702777dcfea0c], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b70275311fc687], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-6 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b7027667186b46], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b702767ce88877], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 365.953184ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b70276a8165ba2], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b70276dd7fca44], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b7027531a5a763], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b702767b126e8f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: 
Considering event: Type = [Normal], Name = [overcommit-7.16b70276a4914ff6], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 696.172874ms] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b70276b72b0201], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b70276e8d37d4a], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b70275322a33a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b7027765956247], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b702779ee81234], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 961.714548ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b70277a5b948d8], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b70277ac9f15c3], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b7027532addb93], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1227/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b7027679a0af3f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b7027690a3957a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 386.055229ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b70276bbb6929e], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b702770ddacde2], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b70279e4a95959], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:01:53.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1227" for this suite. 
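The events above belong to the local ephemeral storage test: it fills each worker node's allocatable ephemeral-storage with "overcommit" pause pods and then checks that one more pod is rejected with "Insufficient ephemeral-storage". A minimal sketch of such a pod using the k8s.io/api types follows; the pod name and the 25Gi figure are hypothetical, since the real test sizes each pod from the node's reported allocatable.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative overcommit pod: the scheduler only reserves ephemeral-storage
	// for pods that declare it, so each filler pod carries an explicit limit.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-demo"}, // hypothetical name
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{
						// Assumed size; the e2e test derives it from node allocatable.
						v1.ResourceEphemeralStorage: resource.MustParse("25Gi"),
					},
				},
			}},
		},
	}
	q := pod.Spec.Containers[0].Resources.Limits[v1.ResourceEphemeralStorage]
	fmt.Println("ephemeral-storage limit:", q.String())
}

Because only a limit is declared, the request defaults to the same value, and that is what the scheduler charges against the node's allocatable ephemeral-storage when it finally reports the FailedScheduling event seen above.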
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.420 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":8,"skipped":2617,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:01:53.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Nov 13 05:01:53.178: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:02:53.233: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
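The steps above outline the PodTopologySpread preemption scenario: each of the two nodes gets 10 units of a fake extended resource, one high-priority and three low-priority pods occupy 9/10 on both nodes, and a medium-priority pod carrying a topology spread constraint over the dedicated kubernetes.io/e2e-pts-preemption key then forces a lower-priority pod to be preempted. A rough sketch of that "medium" pod is below; the priority class name, the extended resource name, the quantities, and the label selector are assumptions for illustration, only the topology key comes from the log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "medium"}, // hypothetical label
		},
		Spec: v1.PodSpec{
			PriorityClassName: "medium-priority", // hypothetical class name
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					// Hypothetical extended resource standing in for the
					// "10 fake resource" applied to each node in the log.
					Requests: v1.ResourceList{
						"example.com/fake-resource": resource.MustParse("4"),
					},
					Limits: v1.ResourceList{
						"example.com/fake-resource": resource.MustParse("4"),
					},
				},
			}},
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: v1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}

With both nodes nearly full, satisfying the spread constraint for this pod is only possible by evicting a lower-priority occupant, which is the preemption behaviour the test verifies.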
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:03:27.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-2279" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:94.396 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":9,"skipped":2643,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:03:27.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:03:27.570: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:03:27.579: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:03:27.582: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:03:27.594: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:03:27.594: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:03:27.594: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:03:27.594: INFO: Container discover ready: false, restart count 0 Nov 13 05:03:27.594: INFO: Container init ready: false, restart count 0 Nov 13 05:03:27.594: INFO: Container install ready: false, restart count 0 Nov 13 05:03:27.594: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:03:27.594: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:03:27.594: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:03:27.594: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:03:27.594: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:03:27.594: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:03:27.594: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:03:27.594: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:03:27.594: INFO: Container collectd ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:03:27.594: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:03:27.594: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:03:27.594: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:03:27.594: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container grafana ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:03:27.594: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:03:27.594: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:03:27.594: INFO: low-1 from sched-preemption-2279 started at 2021-11-13 05:03:07 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container low-1 ready: true, restart count 0 Nov 13 05:03:27.594: INFO: medium from sched-preemption-2279 started at 2021-11-13 05:03:23 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.594: INFO: Container medium ready: true, restart count 0 Nov 13 05:03:27.594: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:03:27.601: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:03:27.601: INFO: Container discover ready: false, restart count 0 Nov 13 05:03:27.601: INFO: Container init ready: false, restart count 0 Nov 13 05:03:27.601: INFO: Container install ready: false, restart count 0 Nov 13 05:03:27.601: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:03:27.601: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:03:27.601: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:03:27.601: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:03:27.601: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:03:27.601: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:03:27.601: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:03:27.601: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:03:27.601: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:03:27.601: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:03:27.601: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:03:27.601: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:03:27.601: INFO: Container collectd ready: true, restart count 0 Nov 13 05:03:27.601: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:03:27.601: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 
05:03:27.601: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:03:27.601: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:27.601: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:03:27.601: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.601: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:03:27.602: INFO: high from sched-preemption-2279 started at 2021-11-13 05:03:02 +0000 UTC (1 container statuses recorded) Nov 13 05:03:27.602: INFO: Container high ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:03:43.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9830" for this suite. 
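This test exercises the filtering side of PodTopologySpread: with MaxSkew=1 and whenUnsatisfiable=DoNotSchedule over the dedicated kubernetes.io/e2e-pts-filter key, four replicas can only land 2 and 2 across the two labelled nodes. A small sketch of the constraint plus the skew check it implies; the pod label used in the selector is hypothetical, the topology key and MaxSkew come from the test name and log.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// skewOK reports whether a placement of matching pods across two topology
// domains satisfies a MaxSkew constraint (skew = max count - min count).
func skewOK(maxSkew, countA, countB int32) bool {
	skew := countA - countB
	if skew < 0 {
		skew = -skew
	}
	return skew <= maxSkew
}

func main() {
	constraint := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter",
		WhenUnsatisfiable: v1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // hypothetical label
		},
	}
	// 4 replicas over 2 nodes: a 2/2 split keeps the skew at 0, while 3/1 would
	// exceed MaxSkew=1 and be rejected by the scheduler's filter.
	fmt.Println(constraint.TopologyKey, skewOK(constraint.MaxSkew, 2, 2), skewOK(constraint.MaxSkew, 3, 1))
}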
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.176 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":10,"skipped":2827,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:03:43.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:03:43.752: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:03:43.760: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:03:43.764: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:03:43.772: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:03:43.772: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:03:43.772: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:03:43.772: INFO: Container discover ready: false, restart count 0 Nov 13 05:03:43.772: INFO: Container init ready: false, restart count 0 Nov 13 05:03:43.772: INFO: Container install ready: false, restart count 0 Nov 13 05:03:43.772: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:03:43.772: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:03:43.772: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:03:43.772: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:03:43.772: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:03:43.772: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:03:43.772: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:03:43.772: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:03:43.772: INFO: Container collectd ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:03:43.772: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:03:43.772: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:03:43.772: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:03:43.772: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container grafana ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:03:43.772: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:03:43.772: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:03:43.772: INFO: rs-e2e-pts-filter-tdmf9 from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:03:43.772: INFO: rs-e2e-pts-filter-wqb84 from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.772: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:03:43.772: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:03:43.796: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:03:43.796: INFO: Container discover ready: false, restart count 0 Nov 13 05:03:43.796: INFO: Container init ready: false, restart count 0 Nov 13 05:03:43.796: INFO: Container install ready: false, restart count 0 Nov 13 05:03:43.796: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:03:43.796: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:03:43.796: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:03:43.796: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:03:43.796: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:03:43.796: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:03:43.796: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:03:43.796: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:03:43.796: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:03:43.796: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:03:43.796: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:03:43.796: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:03:43.796: INFO: Container collectd ready: true, restart count 0 Nov 13 05:03:43.796: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:03:43.796: INFO: Container 
rbac-proxy ready: true, restart count 0 Nov 13 05:03:43.796: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:03:43.796: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:43.796: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:03:43.796: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:03:43.796: INFO: rs-e2e-pts-filter-k8gtv from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:03:43.796: INFO: rs-e2e-pts-filter-x4j7t from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:43.796: INFO: Container e2e-pts-filter ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8ba1d60e-9b46-41c4-a6a6-65070c8f793a=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-7dad32b4-1b28-4057-973e-6b2a50330802 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70293e5a6b48a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9998/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294391093af], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b702944a84115b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 292.775333ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294510c235a], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294581d2f14], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294d6fa5baf], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b70294d795f8ab], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-8ba1d60e-9b46-41c4-a6a6-65070c8f793a: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Warning], Name = [without-toleration.16b70294d7b1d40b], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-x26dd" : object "sched-pred-9998"/"kube-root-ca.crt" not registered] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b70294d795f8ab], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-8ba1d60e-9b46-41c4-a6a6-65070c8f793a: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70293e5a6b48a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9998/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294391093af], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b702944a84115b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 292.775333ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294510c235a], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294581d2f14], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b70294d6fa5baf], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [without-toleration.16b70294d7b1d40b], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "kube-api-access-x26dd" : object "sched-pred-9998"/"kube-root-ca.crt" not registered] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8ba1d60e-9b46-41c4-a6a6-65070c8f793a=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b702951c7adb36], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9998/still-no-tolerations to node1] STEP: removing the label kubernetes.io/e2e-label-key-7dad32b4-1b28-4057-973e-6b2a50330802 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-7dad32b4-1b28-4057-973e-6b2a50330802 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8ba1d60e-9b46-41c4-a6a6-65070c8f793a=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:03:49.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9998" for this suite. 
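The sequence above is the negative taints/tolerations case: a random NoSchedule taint is applied to node1, the "still-no-tolerations" pod is pinned to that node via a node selector on the random label but carries no toleration, so it stays Pending with the FailedScheduling event shown, and it only schedules once the taint is removed. A short sketch of the taint from the log and the toleration that would have matched it; the toleration itself is illustrative, since the test deliberately omits it.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Taint placed on node1 during the test (key, value, and effect taken from the log above).
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-8ba1d60e-9b46-41c4-a6a6-65070c8f793a",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}
	// A toleration like this one would have allowed the pod onto the tainted node.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}
	fmt.Println("toleration matches taint:", toleration.ToleratesTaint(&taint))
}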
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.199 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":11,"skipped":3279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:03:49.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Nov 13 05:03:49.961: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 13 05:03:49.969: INFO: Waiting for terminating namespaces to be deleted... 
Nov 13 05:03:49.972: INFO: Logging pods the apiserver thinks is on node node1 before test Nov 13 05:03:49.999: INFO: cmk-4tcdw from kube-system started at 2021-11-12 21:21:00 +0000 UTC (2 container statuses recorded) Nov 13 05:03:49.999: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:03:49.999: INFO: cmk-init-discover-node1-vkj2s from kube-system started at 2021-11-12 21:20:18 +0000 UTC (3 container statuses recorded) Nov 13 05:03:49.999: INFO: Container discover ready: false, restart count 0 Nov 13 05:03:49.999: INFO: Container init ready: false, restart count 0 Nov 13 05:03:49.999: INFO: Container install ready: false, restart count 0 Nov 13 05:03:49.999: INFO: cmk-webhook-6c9d5f8578-2gp25 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 05:03:49.999: INFO: kube-flannel-r7bbp from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 05:03:49.999: INFO: kube-multus-ds-amd64-4wqsv from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:03:49.999: INFO: kube-proxy-p6kbl from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 05:03:49.999: INFO: nginx-proxy-node1 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:03:49.999: INFO: node-feature-discovery-worker-zgr4c from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:03:49.999: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:03:49.999: INFO: collectd-74xkn from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 container statuses recorded) Nov 13 05:03:49.999: INFO: Container collectd ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:03:49.999: INFO: node-exporter-hqkfs from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:03:49.999: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:03:49.999: INFO: prometheus-k8s-0 from monitoring started at 2021-11-12 21:22:14 +0000 UTC (4 container statuses recorded) Nov 13 05:03:49.999: INFO: Container config-reloader ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container grafana ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container prometheus ready: true, restart count 1 Nov 13 05:03:49.999: INFO: prometheus-operator-585ccfb458-qcz7s from monitoring started at 2021-11-12 21:21:55 +0000 UTC (2 
container statuses recorded) Nov 13 05:03:49.999: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:49.999: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 05:03:49.999: INFO: rs-e2e-pts-filter-tdmf9 from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:03:49.999: INFO: rs-e2e-pts-filter-wqb84 from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:03:49.999: INFO: still-no-tolerations from sched-pred-9998 started at 2021-11-13 05:03:49 +0000 UTC (1 container statuses recorded) Nov 13 05:03:49.999: INFO: Container still-no-tolerations ready: false, restart count 0 Nov 13 05:03:49.999: INFO: Logging pods the apiserver thinks is on node node2 before test Nov 13 05:03:50.012: INFO: cmk-init-discover-node2-5f4hp from kube-system started at 2021-11-12 21:20:38 +0000 UTC (3 container statuses recorded) Nov 13 05:03:50.012: INFO: Container discover ready: false, restart count 0 Nov 13 05:03:50.012: INFO: Container init ready: false, restart count 0 Nov 13 05:03:50.012: INFO: Container install ready: false, restart count 0 Nov 13 05:03:50.012: INFO: cmk-qhvr7 from kube-system started at 2021-11-12 21:21:01 +0000 UTC (2 container statuses recorded) Nov 13 05:03:50.012: INFO: Container nodereport ready: true, restart count 0 Nov 13 05:03:50.012: INFO: Container reconcile ready: true, restart count 0 Nov 13 05:03:50.012: INFO: kube-flannel-mg66r from kube-system started at 2021-11-12 21:08:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 05:03:50.012: INFO: kube-multus-ds-amd64-2wqj5 from kube-system started at 2021-11-12 21:08:45 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kube-multus ready: true, restart count 1 Nov 13 05:03:50.012: INFO: kube-proxy-pzhf2 from kube-system started at 2021-11-12 21:07:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 05:03:50.012: INFO: kubernetes-dashboard-785dcbb76d-w2mls from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 05:03:50.012: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk from kube-system started at 2021-11-12 21:09:15 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 05:03:50.012: INFO: nginx-proxy-node2 from kube-system started at 2021-11-12 21:07:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 05:03:50.012: INFO: node-feature-discovery-worker-mm7xs from kube-system started at 2021-11-12 21:16:36 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 05:03:50.012: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh from kube-system started at 2021-11-12 21:17:59 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 05:03:50.012: INFO: collectd-mp2z6 from monitoring started at 2021-11-12 21:25:58 +0000 UTC (3 
container statuses recorded) Nov 13 05:03:50.012: INFO: Container collectd ready: true, restart count 0 Nov 13 05:03:50.012: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 05:03:50.012: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 05:03:50.012: INFO: node-exporter-hstd9 from monitoring started at 2021-11-12 21:22:03 +0000 UTC (2 container statuses recorded) Nov 13 05:03:50.012: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 05:03:50.012: INFO: Container node-exporter ready: true, restart count 0 Nov 13 05:03:50.012: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 from monitoring started at 2021-11-12 21:25:09 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container tas-extender ready: true, restart count 0 Nov 13 05:03:50.012: INFO: rs-e2e-pts-filter-k8gtv from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container e2e-pts-filter ready: true, restart count 0 Nov 13 05:03:50.012: INFO: rs-e2e-pts-filter-x4j7t from sched-pred-9830 started at 2021-11-13 05:03:39 +0000 UTC (1 container statuses recorded) Nov 13 05:03:50.012: INFO: Container e2e-pts-filter ready: true, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6.16b70296498b2ccc], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6.16b7029912a56cc6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6656/filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6 to node1] STEP: Considering event: Type = [Normal], Name = [filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6.16b7029977ac46a8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6.16b7029988d4347b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 287.821678ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6.16b70299909561e5], Reason = [Created], Message = [Created container filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6] STEP: Considering event: Type = [Normal], Name = [filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6.16b7029997639527], Reason = [Started], Message = [Started container filler-pod-19e007de-423f-4929-9d7e-41f43a8e7eb6] STEP: Considering event: Type = [Normal], Name = [without-label.16b70295591872ad], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6656/without-label to node1] STEP: Considering event: Type = [Normal], Name = [without-label.16b70295d2d5ac31], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16b70295e4a347e4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 298.679884ms] STEP: Considering event: Type = [Normal], Name = [without-label.16b70295eb4a9acb], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b70295f36296c2], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b7029648dffb4e], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod816d7969-0077-48cb-b79a-36b33a11a4d7.16b7029a04a73ce8], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:04:11.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6656" for this suite. 
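These events come from the pod overhead test: a RuntimeClass whose overhead charges the fake extended resource example.com/beardsecond is registered, a filler pod consumes most of that resource on a node, and the additional pod fails with "Insufficient example.com/beardsecond" because its RuntimeClass overhead is added on top of its container requests. A sketch of the two objects involved; the RuntimeClass name, handler, and all quantities are assumptions, only the resource name comes from the log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical RuntimeClass: pods selecting it are charged PodFixed overhead
	// in addition to their container requests when the scheduler fits them.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"}, // illustrative name
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				"example.com/beardsecond": resource.MustParse("250"), // assumed figure
			},
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &rc.Name,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						"example.com/beardsecond": resource.MustParse("700"), // assumed figure
					},
					Limits: corev1.ResourceList{
						"example.com/beardsecond": resource.MustParse("700"),
					},
				},
			}},
		},
	}

	// Effective demand seen by the scheduler = container requests + pod overhead.
	req := pod.Spec.Containers[0].Resources.Requests["example.com/beardsecond"]
	req.Add(rc.Overhead.PodFixed["example.com/beardsecond"])
	fmt.Println("effective example.com/beardsecond request:", req.String())
}

It is this combined figure, not the bare container request, that has to fit under the node's advertised capacity, which is why the second pod in the log cannot be placed.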
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.190 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":12,"skipped":4257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 05:04:11.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Nov 13 05:04:11.169: INFO: Waiting up to 1m0s for all nodes to be ready Nov 13 05:05:11.224: INFO: Waiting for terminating namespaces to be deleted... 
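Before scoring, the test measures how much CPU and memory each node already has requested relative to its allocatable, which is what the ComputeCPUMemFraction lines below report. The arithmetic is a plain ratio; a minimal sketch reproducing node1's logged values (CPU in millicores, memory in bytes):

package main

import "fmt"

func main() {
	// Reproduces the ComputeCPUMemFraction arithmetic logged below for node1:
	// fraction = total requested / allocatable.
	var (
		requestedCPUMilli   = 100.0          // totalRequestedCPUResource
		allocatableCPUMilli = 77000.0        // cpuAllocatableMil
		requestedMemBytes   = 104857600.0    // totalRequestedMemResource (100Mi)
		allocatableMemBytes = 178884632576.0 // memAllocatableVal
	)
	fmt.Println("cpuFraction:", requestedCPUMilli/allocatableCPUMilli) // ~0.0012987
	fmt.Println("memFraction:", requestedMemBytes/allocatableMemBytes) // ~0.00058617
}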
Nov 13 05:05:11.227: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Nov 13 05:05:11.248: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting Nov 13 05:05:11.248: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting Nov 13 05:05:11.248: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Nov 13 05:05:11.248: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Nov 13 05:05:11.266: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:05:11.266: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:05:11.266: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:11.266: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:05:11.266: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Nov 13 05:05:19.363: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:05:19.363: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.363: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.363: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.363: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:05:19.364: INFO: Node: node2, totalRequestedMemResource: 104857600, 
memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Nov 13 05:05:19.364: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-q7m54, Cpu: 100, Mem: 209715200 Nov 13 05:05:19.364: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Nov 13 05:05:19.364: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Nov 13 05:05:19.375: INFO: Waiting for running... Nov 13 05:05:19.378: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
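Before the scoring behavior is measured, the suite creates one "balanced" pod per node so that both nodes sit at roughly the same utilization and only the topology-spread score differentiates them; the step below then recomputes the fractions. A rough sketch of the sizing idea, with a hypothetical target fraction (the framework's exact formula, in particular for memory, may differ):

package main

import "fmt"

// balancedRequest sizes a filler pod so that a node ends up at targetFraction of
// its allocatable capacity, given what is already requested on it. Illustrative
// only; not the e2e framework's actual code.
func balancedRequest(allocatable, alreadyRequested int64, targetFraction float64) int64 {
	want := int64(targetFraction*float64(allocatable)) - alreadyRequested
	if want < 0 {
		want = 0
	}
	return want
}

func main() {
	// Bringing a 77000m-CPU node that already has 100m requested up to half
	// utilization asks for 38400m, the same Cpu: 38400 the balanced pods below report.
	fmt.Println(balancedRequest(77000, 100, 0.5), "millicores")
}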
Nov 13 05:05:24.449: INFO: ComputeCPUMemFraction for node: node2 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:05:24.449: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Nov 13 05:05:24.449: INFO: ComputeCPUMemFraction for node: node1 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Pod for on the node: 58a2f9cc-f392-48e6-8a2a-277331e46190-0, Cpu: 38400, Mem: 89350041600 Nov 13 05:05:24.449: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Nov 13 05:05:24.449: INFO: Node: node1, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884632576, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 05:05:42.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1951" for this suite. 
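The STEP lines above ("Run a ReplicaSet with 4 replicas on node "node2"" and "Verifying if the test-pod lands on node "node1"") contain the actual assertion: with four matching replicas pinned to node2, a test pod that spreads over the dedicated kubernetes.io/e2e-pts-score topology key should be scored onto node1, since that placement makes the matching pods more evenly distributed. A sketch of a pod spec using such a scoring-only spread constraint, assuming the k8s.io/api and k8s.io/apimachinery modules are available; the pod name, labels, and MaxSkew are illustrative, not the test's literal values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// spreadScoredPod builds a pod whose scheduling score (WhenUnsatisfiable:
// ScheduleAnyway) favors nodes that even out pods matching the label selector
// across the given topology key.
func spreadScoredPod(topologyKey string) *corev1.Pod {
	labels := map[string]string{"app": "pts-score-demo"}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pts-score-demo", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       topologyKey,
				WhenUnsatisfiable: corev1.ScheduleAnyway,
				LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
			}},
		},
	}
}

func main() {
	pod := spreadScoredPod("kubernetes.io/e2e-pts-score")
	fmt.Println(pod.Name, pod.Spec.TopologySpreadConstraints[0].TopologyKey)
}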
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:91.390 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":13,"skipped":5589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSNov 13 05:05:42.541: INFO: Running AfterSuite actions on all nodes Nov 13 05:05:42.541: INFO: Running AfterSuite actions on node 1 Nov 13 05:05:42.541: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0} Ran 13 of 5770 Specs in 538.036 seconds SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5757 Skipped PASS Ginkgo ran 1 suite in 8m59.327175025s Test Suite Passed
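Besides the plain text, the runner also emits one-line JSON progress records, like the "Test Suite completed" entry above, which are convenient for machine consumption. A small sketch for pulling the counters out of such a line; the Go type name is ours, while the field names come straight from the log:

package main

import (
	"encoding/json"
	"fmt"
)

// progressRecord matches the single-line JSON records in this log, for example:
// {"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0}
type progressRecord struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

func main() {
	line := `{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5757,"failed":0}`
	var rec progressRecord
	if err := json.Unmarshal([]byte(line), &rec); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d/%d specs completed, %d failed, %d skipped\n", rec.Msg, rec.Completed, rec.Total, rec.Failed, rec.Skipped)
}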