I0828 02:37:18.897799 22 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0828 02:37:18.897946 22 e2e.go:129] Starting e2e run "a526126f-bb96-4786-918c-bc8e7d180b69" on Ginkgo node 1
{"msg":"Test Suite starting","total":12,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1630118237 - Will randomize all specs
Will run 12 of 5484 specs
Aug 28 02:37:18.933: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 02:37:18.938: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 28 02:37:18.969: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 28 02:37:19.029: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting
Aug 28 02:37:19.029: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting
Aug 28 02:37:19.029: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 28 02:37:19.029: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Aug 28 02:37:19.029: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 28 02:37:19.039: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Aug 28 02:37:19.039: INFO: e2e test version: v1.19.14
Aug 28 02:37:19.039: INFO: kube-apiserver version: v1.19.8
Aug 28 02:37:19.039: INFO: >>> kubeConfig: /root/.kube/config
Aug 28 02:37:19.045: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 02:37:19.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building
a namespace api object, basename sched-pred Aug 28 02:37:19.073: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Aug 28 02:37:19.076: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 28 02:37:19.078: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 02:37:19.086: INFO: Waiting for terminating namespaces to be deleted... Aug 28 02:37:19.088: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:37:19.098: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:37:19.098: INFO: Container discover ready: false, restart count 0 Aug 28 02:37:19.098: INFO: Container init ready: false, restart count 0 Aug 28 02:37:19.098: INFO: Container install ready: false, restart count 0 Aug 28 02:37:19.098: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:37:19.098: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:37:19.098: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:37:19.098: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:37:19.098: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:37:19.098: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:37:19.098: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:37:19.098: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:37:19.098: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:37:19.098: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:37:19.098: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:37:19.098: INFO: Container collectd ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container collectd-exporter 
ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:37:19.098: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:37:19.098: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:37:19.098: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:37:19.098: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container grafana ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:37:19.098: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:37:19.098: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:37:19.105: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:37:19.105: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:37:19.105: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:37:19.105: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:37:19.105: INFO: Container discover ready: false, restart count 0 Aug 28 02:37:19.105: INFO: Container init ready: false, restart count 0 Aug 28 02:37:19.105: INFO: Container install ready: false, restart count 0 Aug 28 02:37:19.105: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:37:19.106: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 02:37:19.106: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:37:19.106: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:37:19.106: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:37:19.106: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:37:19.106: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:37:19.106: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:37:19.106: INFO: collectd-64dp2 from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:37:19.106: INFO: Container collectd ready: true, restart count 0 Aug 28 02:37:19.106: INFO: Container collectd-exporter ready: true, restart count 0 
Aug 28 02:37:19.106: INFO: Container rbac-proxy ready: true, restart count 0
Aug 28 02:37:19.106: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded)
Aug 28 02:37:19.106: INFO: Container kube-rbac-proxy ready: true, restart count 0
Aug 28 02:37:19.106: INFO: Container node-exporter ready: true, restart count 0
Aug 28 02:37:19.106: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC (2 container statuses recorded)
Aug 28 02:37:19.106: INFO: Container tas-controller ready: true, restart count 0
Aug 28 02:37:19.106: INFO: Container tas-extender ready: true, restart count 0
[It] validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.169f57e6d9e18aea], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 02:37:20.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6421" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":12,"completed":1,"skipped":249,"failed":0}
------------------------------
[sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 02:37:20.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141
Aug 28 02:37:20.184: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 28 02:38:20.242: INFO: Waiting for
terminating namespaces to be deleted... Aug 28 02:38:20.245: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 28 02:38:20.264: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting Aug 28 02:38:20.264: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting Aug 28 02:38:20.264: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 28 02:38:20.264: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160 STEP: Trying to launch a pod with a label to get a node which can launch it. STEP: Verifying the node has a label kubernetes.io/hostname Aug 28 02:38:24.306: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:38:24.306: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:38:24.306: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 
209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:24.306: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:38:24.306: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:38:24.317: INFO: Waiting for running... Aug 28 02:38:29.382: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 28 02:38:34.451: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:38:34.451: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Aug 28 02:38:34.451: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Aug 28 02:38:34.451: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:38:34.451: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:38:48.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-7931" for this suite. 
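Note on the case above: the fraction lines are plain arithmetic, e.g. cpuFraction 0.0012987 = totalRequestedCPUResource 100 / cpuAllocatableMil 77000, and memFraction 0.0005862 = 104857600 / 178884628480. The pod launched in the "Trying to launch the pod with podAntiAffinity" step is then expected to land on whichever node does not already run pod-with-label-security-s1. A minimal Go sketch of such a spec follows, assuming the security=s1 label implied by that pod name and a pause container image; the authoritative spec lives in test/e2e/scheduling/priorities.go, and the package and function names here are hypothetical.

package schedsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// antiAffinityPod sketches the kind of pod this test launches: required
// PodAntiAffinity against pods labeled security=s1 within the
// kubernetes.io/hostname topology, so the scheduler must pick the node that
// does not run pod-with-label-security-s1.
func antiAffinityPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"s1"},
							}},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
}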
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138
• [SLOW TEST:88.338 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should be scheduled to node that don't match the PodAntiAffinity terms
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:160
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":12,"completed":2,"skipped":1059,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 02:38:48.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 28 02:38:48.535: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 28 02:39:48.587: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:307
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes.
STEP: Apply 10 fake resource to node node1.
STEP: Apply 10 fake resource to node node2.
[It] validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes.
STEP: Create 1 Medium Pod with TopologySpreadConstraints
STEP: Verify there are 3 Pods left in this namespace
STEP: Pod "high" is as expected to be running.
STEP: Pod "low-1" is as expected to be running.
STEP: Pod "medium" is as expected to be running.
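For reference, the "medium" pod created above pairs a mid-level priority with a topologySpreadConstraint over the dedicated kubernetes.io/e2e-pts-preemption key, so satisfying the spread can only happen by preempting a lower-priority pod on one of the two saturated nodes. A rough Go sketch under those assumptions follows; the priority class name, pod labels, and the extended "fake resource" name are illustrative placeholders, since the log does not show them.

package schedsketch

import (
	"k8s.io/apimachinery/pkg/api/resource"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// mediumSpreadPod sketches the "medium" pod: a mid-priority pod whose spread
// constraint over the dedicated topology key forces preemption of a
// lower-priority pod so both topology domains can be covered.
func mediumSpreadPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"e2e-pts-preemption": "medium"}, // illustrative label
		},
		Spec: corev1.PodSpec{
			PriorityClassName: "medium-priority", // placeholder; the test defines its own classes
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"e2e-pts-preemption": "medium"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					// The log only says "10 fake resource" per node; the resource name is a placeholder.
					Requests: corev1.ResourceList{corev1.ResourceName("example.com/fakePTSRes"): resource.MustParse("1")},
					Limits:   corev1.ResourceList{corev1.ResourceName("example.com/fakePTSRes"): resource.MustParse("1")},
				},
			}},
		},
	}
}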
[AfterEach] PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:325
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 02:40:26.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-5765" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:98.401 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PodTopologySpread Preemption
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:301
validates proper pods are preempted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:337
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":12,"completed":3,"skipped":1229,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 28 02:40:26.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Aug 28 02:40:26.931: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 28 02:40:26.939: INFO: Waiting for terminating namespaces to be deleted...
Aug 28 02:40:26.941: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:40:26.948: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:40:26.948: INFO: Container discover ready: false, restart count 0 Aug 28 02:40:26.948: INFO: Container init ready: false, restart count 0 Aug 28 02:40:26.948: INFO: Container install ready: false, restart count 0 Aug 28 02:40:26.948: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:40:26.948: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:40:26.948: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:40:26.948: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.948: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:40:26.948: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.948: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:40:26.948: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.948: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:40:26.948: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.948: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:40:26.948: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.948: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:40:26.948: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.948: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:40:26.948: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.949: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:40:26.949: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.949: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:40:26.949: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:40:26.949: INFO: Container collectd ready: true, restart count 0 Aug 28 02:40:26.949: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:40:26.949: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:40:26.949: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:40:26.949: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:40:26.949: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:40:26.949: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:40:26.949: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:40:26.949: INFO: Container grafana ready: true, restart count 0 Aug 28 
02:40:26.949: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:40:26.949: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:40:26.949: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:40:26.949: INFO: high from sched-preemption-5765 started at 2021-08-28 02:40:01 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.949: INFO: Container high ready: true, restart count 0 Aug 28 02:40:26.949: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:40:26.959: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:40:26.959: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:40:26.959: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:40:26.959: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:40:26.959: INFO: Container discover ready: false, restart count 0 Aug 28 02:40:26.959: INFO: Container init ready: false, restart count 0 Aug 28 02:40:26.959: INFO: Container install ready: false, restart count 0 Aug 28 02:40:26.959: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.959: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:40:26.959: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.959: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 02:40:26.959: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.959: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:40:26.959: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.959: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:40:26.959: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.959: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:40:26.960: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.960: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:40:26.960: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.960: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:40:26.960: INFO: collectd-64dp2 from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:40:26.960: INFO: Container collectd ready: true, restart count 0 Aug 28 02:40:26.960: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:40:26.960: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:40:26.960: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:40:26.960: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:40:26.960: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:40:26.960: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC 
(2 container statuses recorded) Aug 28 02:40:26.960: INFO: Container tas-controller ready: true, restart count 0 Aug 28 02:40:26.960: INFO: Container tas-extender ready: true, restart count 0 Aug 28 02:40:26.960: INFO: low-1 from sched-preemption-5765 started at 2021-08-28 02:40:07 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.960: INFO: Container low-1 ready: true, restart count 0 Aug 28 02:40:26.960: INFO: medium from sched-preemption-5765 started at 2021-08-28 02:40:21 +0000 UTC (1 container statuses recorded) Aug 28 02:40:26.960: INFO: Container medium ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Aug 28 02:40:26.993: INFO: Pod cmk-fzjgr requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod cmk-jw4m6 requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod cmk-webhook-6c9d5f8578-ndbx2 requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod kube-flannel-ssxn7 requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod kube-flannel-t9qv4 requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod kube-multus-ds-amd64-nn7bl requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod kube-multus-ds-amd64-tfffk requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod kube-proxy-pb5bl requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod kube-proxy-r4q4t requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod kubernetes-dashboard-86c6f9df5b-c56fg requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod kubernetes-metrics-scraper-678c97765c-gtp5x requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod node-feature-discovery-worker-54lfh requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod node-feature-discovery-worker-bd9kg requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod collectd-64dp2 requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod collectd-ccvwg requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod node-exporter-4cvlq requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod node-exporter-p6h5h requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod tas-telemetry-aware-scheduling-575ccbc9d4-v99df requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Pod high requesting local ephemeral resource =0 on Node node1 Aug 28 02:40:26.993: INFO: Pod low-1 requesting local ephemeral resource =0 on Node 
node2 Aug 28 02:40:26.993: INFO: Pod medium requesting local ephemeral resource =0 on Node node2 Aug 28 02:40:26.993: INFO: Using pod capacity: 40542413347 Aug 28 02:40:26.993: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 Aug 28 02:40:26.993: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Aug 28 02:40:27.186: INFO: Waiting for running... STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f581297bb11b7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-0 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f5813e4f15300], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.35/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f581424d9c876], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f581440291c00], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 458.16699ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f58145f439015], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.169f5814c34141dc], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f58129844d767], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f5814df9fd312], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.109/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f5814e04bff18], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f5815d0c36aab], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 4.034347922s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f5815d6ff7925], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.169f5815dcf9357d], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f58129d522789], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-10 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f5814bfbe80f5], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.37/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f5814c0f06507], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f581527081492], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.712820683s] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f58152d7ef3b0], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.169f581534230bd4], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f58129df237fc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-11 to node2] STEP: Considering event: 
Type = [Normal], Name = [overcommit-11.169f5814d101e4a9], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.105/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f5814def405ed], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f58151e79d5dc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.065724416s] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f581523d42672], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.169f58152a3fc58c], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f58129e7f52d2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f58143e8dda0b], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.36/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f5814aff30745], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f5814ce4bdfab], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 509.131027ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f5814d63171b7], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.169f5814dd3e6220], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f58129f1d9458], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-13 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f58141e01a430], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.34/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f58143db12ca6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f58145ef52acf], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 558.096053ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f581479ca081c], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.169f5814c6420f29], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f58129fb9da7f], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f5814d0e9c5e5], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.104/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f5814def51e92], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f58153d3bae1e], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.581663813s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f581542a70375], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.169f5815489b3001], Reason = 
[Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f5812a03cfe80], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-15 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f5814a491a66b], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.101/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f5814be41e835], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f5814e50f3a90], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 650.981203ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f5814f346e6ad], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.169f581500a639ff], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f5812a0c84f37], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-16 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f5814df3c4405], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.108/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f5814e00bccb9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f5815b3a57b0a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.550056729s] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f5815b92abfe1], Reason = [Created], Message = [Created container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-16.169f5815bf1b7f64], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f5812a1487118], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f5814c13b56dc], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.42/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f5814c2224330], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f581562ed5ce8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.697657052s] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f58156951c053], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.169f58156f663857], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f5812a1ec825c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f5814c12c8df2], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.43/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f5814c2680056], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f58157f28095a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 
3.166695739s] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f581585fc9f09], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.169f58158c9ebe80], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f5812a275c77c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f5814bec2888e], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.38/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f5814c04e3b08], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f581509d24bea], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 1.233384814s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f5815102be218], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.169f581516693aed], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f581298e6a617], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f5814c1cb27db], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.41/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f5814c29388a9], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f58159e84b366], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.690009304s] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f5815a5b94409], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.169f5815ab903534], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f5812996edb38], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-3 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f5814a4b08384], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.106/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f5814cc28fad2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f58150155bbe8], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 892.117311ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f581511d35ae4], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.169f58151776a0bd], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f581299f7d0b1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f5814bfea877c], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.39/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f5814c1497d03], 
Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f581543ad56ae], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.18757504s] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f58154a45494e], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.169f58154fe75cd8], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f58129a83b897], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f5814a4fb459d], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.102/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f5814a974fd6d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f5814c5772682], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 469.895806ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f5814d7e97d12], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.169f5814e41ff660], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f58129b1493fc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f5814df05b969], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.107/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f5814e007905e], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f581596717235], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 3.060384733s] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f58159c9e268c], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.169f5815a2da1f54], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f58129ba2078a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-7 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f5814d17ef0b5], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.103/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f5814def6b3ea], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f58155b1117cc], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.082096371s] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f5815619a4925], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.169f581567bb944e], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f58129c2addbe], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-8 to node2] STEP: 
Considering event: Type = [Normal], Name = [overcommit-8.169f5814df32d78b], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.110/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f5814dfea475f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f581578732de1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 2.559093982s] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f58157e732262], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.169f581584ed8dd6], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f58129cc1ac0b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5209/overcommit-9 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f5814bebfceb1], Reason = [AddedInterface], Message = [Add eth0 [10.244.3.40/24]] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f5814bfccfc29], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f5814eaf610d3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 724.102748ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f5814f1416e49], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.169f5814f782d4ca], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.169f58174f451564], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:40:48.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5209" for this suite. 
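For reference, the "Insufficient ephemeral-storage" failure above is produced by pods that request local ephemeral storage until the nodes' allocatable amount is exhausted, at which point one more pod cannot fit. Below is a minimal Go sketch of a pod spec that makes such a request; the pod name, namespace, and the 10Gi quantity are illustrative placeholders, not values taken from this run.

    // Sketch only: a pod requesting local ephemeral storage, similar in shape to the
    // overcommit-N pause pods this spec schedules. Names and quantities are illustrative.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "overcommit-demo", Namespace: "sched-pred-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.2",
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceEphemeralStorage: resource.MustParse("10Gi"),
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceEphemeralStorage: resource.MustParse("10Gi"),
                        },
                    },
                }},
            },
        }
        // The scheduler sums these requests against each node's allocatable ephemeral-storage;
        // once the sum exceeds allocatable, further pods fail with "Insufficient ephemeral-storage".
        q := pod.Spec.Containers[0].Resources.Requests[corev1.ResourceEphemeralStorage]
        fmt.Println(q.String())
    }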
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:21.375 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":12,"completed":4,"skipped":1540,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:40:48.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 28 02:40:48.322: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 02:40:48.331: INFO: Waiting for terminating namespaces to be deleted... 
Aug 28 02:40:48.333: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:40:48.352: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:40:48.352: INFO: Container discover ready: false, restart count 0 Aug 28 02:40:48.352: INFO: Container init ready: false, restart count 0 Aug 28 02:40:48.352: INFO: Container install ready: false, restart count 0 Aug 28 02:40:48.352: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:40:48.352: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:40:48.352: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:40:48.352: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:40:48.352: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:40:48.352: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:40:48.352: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:40:48.352: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:40:48.352: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:40:48.352: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:40:48.352: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.352: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:40:48.352: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:40:48.352: INFO: Container collectd ready: true, restart count 0 Aug 28 02:40:48.352: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:40:48.353: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:40:48.353: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:40:48.353: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:40:48.353: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:40:48.353: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:40:48.353: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:40:48.353: INFO: Container grafana ready: true, restart count 0 Aug 28 
02:40:48.353: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:40:48.353: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:40:48.353: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-0 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-0 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-10 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-10 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-12 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-12 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-13 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-13 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-17 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-17 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-18 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-18 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-19 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-19 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-2 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-2 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-4 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-4 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: overcommit-9 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.353: INFO: Container overcommit-9 ready: true, restart count 0 Aug 28 02:40:48.353: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:40:48.367: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:40:48.367: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:40:48.367: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:40:48.367: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:40:48.367: INFO: Container discover ready: false, restart count 0 Aug 28 02:40:48.367: INFO: Container init ready: false, restart count 0 Aug 28 02:40:48.367: INFO: Container install ready: false, restart count 0 Aug 28 02:40:48.367: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:40:48.368: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container kube-flannel ready: 
true, restart count 2 Aug 28 02:40:48.368: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:40:48.368: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:40:48.368: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:40:48.368: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:40:48.368: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:40:48.368: INFO: collectd-64dp2 from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:40:48.368: INFO: Container collectd ready: true, restart count 0 Aug 28 02:40:48.368: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:40:48.368: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:40:48.368: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:40:48.368: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:40:48.368: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:40:48.368: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC (2 container statuses recorded) Aug 28 02:40:48.368: INFO: Container tas-controller ready: true, restart count 0 Aug 28 02:40:48.368: INFO: Container tas-extender ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-1 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-1 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-11 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-11 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-14 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-14 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-15 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-15 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-16 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-16 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-3 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-3 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-5 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) 
Aug 28 02:40:48.368: INFO: Container overcommit-5 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-6 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-6 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-7 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-7 ready: true, restart count 0 Aug 28 02:40:48.368: INFO: overcommit-8 from sched-pred-5209 started at 2021-08-28 02:40:27 +0000 UTC (1 container statuses recorded) Aug 28 02:40:48.368: INFO: Container overcommit-8 ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-df24cf7d-df0a-4cbf-8cf8-38d29a14716a=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-6681e8f0-afbb-438c-b403-7aa995ed4914 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.169f581791b3f891], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3618/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5817f767222f], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.111/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5817f82d8d26], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5818135054ca], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 455.246712ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5818198a50fb], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f58182023e74f], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5818819b5e40], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.169f58188359f628], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-df24cf7d-df0a-4cbf-8cf8-38d29a14716a: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f58189c90e91c], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.169f58188359f628], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) had taint {kubernetes.io/e2e-taint-key-df24cf7d-df0a-4cbf-8cf8-38d29a14716a: testing-taint-value}, that the pod didn't tolerate, 4 node(s) didn't match node selector.] 
STEP: Considering event: Type = [Normal], Name = [without-toleration.169f581791b3f891], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3618/without-toleration to node2] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5817f767222f], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.111/24]] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5817f82d8d26], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5818135054ca], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 455.246712ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5818198a50fb], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f58182023e74f], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f5818819b5e40], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.169f58189c90e91c], Reason = [SandboxChanged], Message = [Pod sandbox changed, it will be killed and re-created.] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-df24cf7d-df0a-4cbf-8cf8-38d29a14716a=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.169f58192c7845f1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3618/still-no-tolerations to node2] STEP: removing the label kubernetes.io/e2e-label-key-6681e8f0-afbb-438c-b403-7aa995ed4914 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-6681e8f0-afbb-438c-b403-7aa995ed4914 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-df24cf7d-df0a-4cbf-8cf8-38d29a14716a=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:40:55.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3618" for this suite. 
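The FailedScheduling event above is driven purely by the taint/toleration mismatch: the node carries a generated NoSchedule taint and the relaunched pod declares no toleration for it. A small Go sketch of that check follows, using the Toleration.ToleratesTaint helper from k8s.io/api/core/v1; the key and value here are illustrative stand-ins for the generated e2e-taint-key-* pair, and this is not the test's own code.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        taint := corev1.Taint{
            Key:    "kubernetes.io/e2e-taint-key-demo",
            Value:  "testing-taint-value",
            Effect: corev1.TaintEffectNoSchedule,
        }

        // The "still-no-tolerations" pod carries no tolerations at all.
        var podTolerations []corev1.Toleration

        // A toleration that would have matched, had the pod declared it.
        matching := corev1.Toleration{
            Key:      taint.Key,
            Operator: corev1.TolerationOpEqual,
            Value:    taint.Value,
            Effect:   corev1.TaintEffectNoSchedule,
        }

        tolerated := false
        for i := range podTolerations {
            if podTolerations[i].ToleratesTaint(&taint) {
                tolerated = true
            }
        }
        fmt.Println("pod tolerates taint:", tolerated)                         // false -> FailedScheduling
        fmt.Println("with the toleration added:", matching.ToleratesTaint(&taint)) // true
    }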
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.189 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":12,"completed":5,"skipped":2659,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:40:55.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 Aug 28 02:40:55.516: INFO: Waiting up to 1m0s for all nodes to be ready Aug 28 02:41:55.570: INFO: Waiting for terminating namespaces to be deleted... Aug 28 02:41:55.573: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 28 02:41:55.592: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting Aug 28 02:41:55.592: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting Aug 28 02:41:55.592: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 28 02:41:55.592: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
[It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 Aug 28 02:41:55.608: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.608: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:41:55.609: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:41:55.609: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Pod for 
on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:41:55.609: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:41:55.609: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:41:55.623: INFO: Waiting for running... Aug 28 02:42:00.693: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 28 02:42:05.763: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Aug 28 02:42:05.763: INFO: Node: node1, totalRequestedMemResource: 1250829250560, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Aug 28 02:42:05.763: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Pod for on the node: e67d201d-ce5a-4efb-999a-50a21dd1a0e3-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:42:05.763: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Aug 28 02:42:05.763: INFO: Node: node2, totalRequestedMemResource: 1161491793920, memAllocatableVal: 178884628480, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
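The ComputeCPUMemFraction lines above are requested/allocatable ratios, apparently clamped to 1 once the balanced placeholder pods push requests past allocatable, which is why both nodes report cpuFraction and memFraction of 1 after balancing. A rough sketch of that arithmetic follows; the function and variable names are my own, not the framework's.

    package main

    import "fmt"

    func fraction(requested, allocatable int64) float64 {
        f := float64(requested) / float64(allocatable)
        if f > 1 { // clamp, matching the post-balance values of 1 in the log
            f = 1
        }
        return f
    }

    func main() {
        fmt.Println(fraction(100, 77000))              // ~0.0012987, the pre-balance cpuFraction
        fmt.Println(fraction(104857600, 178884628480)) // ~0.000586, the pre-balance memFraction
        fmt.Println(fraction(537700, 77000))           // clamped to 1 after the balanced pods are created
    }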
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-fe5d03d5-e1a7-4ae9-a841-7923ca08cb9d=testing-taint-value-c37142dd-f8a6-4d64-8c58-e89ee0dd2300:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-5e692955-9d9c-44ee-ae3b-bd538fb7b4c7=testing-taint-value-ee706ca2-02ca-4235-9242-309fcc2a4fff:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-28c71984-6d48-4ab4-8860-cf0cc239b784=testing-taint-value-72922a32-9d0e-4aa7-abd1-dc9d9f329374:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b4885d8e-4118-421a-8b6c-67da4e73f927=testing-taint-value-7d8e88fa-2558-45b3-982c-b5a99364776b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-b8e35512-2a02-4cd2-a7f9-1ea8352bffdd=testing-taint-value-37f5ccec-fa93-40c8-9fd6-345dc82208ba:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-034d151b-9a9f-4936-874c-7d08c0472517=testing-taint-value-eaec1d2a-5596-442a-8fbc-109809bd4577:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3a0e8233-61a6-4175-9ac3-1ec8fd037e7c=testing-taint-value-618a34ca-a8c5-4db8-b3c8-1c2260312a6a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-2328aea6-b999-4e4c-9d64-d55ab445087c=testing-taint-value-6fe0c5de-d0ae-4d95-bd43-2c3d9530dce4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-40ae47f0-8a7b-452a-8056-9561179bd132=testing-taint-value-59765a32-288e-4632-8a79-ecc8006cd1db:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-36200aa4-91a0-4eb6-ae17-40210da45794=testing-taint-value-0c0930c9-3a63-4f5d-b3ce-61cbba5165ce:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-8f5900a1-b46f-42ff-99ed-6d85df3066d5=testing-taint-value-2a8618ca-99c0-4e6d-94d4-2952d46ff899:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-cb14483e-d44d-47cc-b050-b1bd0e193eac=testing-taint-value-8b5c0d23-f73d-4489-b045-48d1a4eadbf8:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-44a48c2c-3a37-4351-83e5-647733e8144d=testing-taint-value-4bac77dd-fdae-4d61-a6ab-15fef0544f7f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-01d3f4f0-3b61-4186-baf7-bcd609eb0359=testing-taint-value-008c24d7-a097-4ba2-b3bd-dfab980f92c6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-3d9120d9-5688-4fd7-b68e-81254331d326=testing-taint-value-1c6ba8ac-3591-4ab7-ab50-59c99a1dff2f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f2cd2a56-7b3d-4f44-b061-a05c202a56ad=testing-taint-value-63394209-c093-4157-9058-9047b5552634:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-78e804f2-8d00-4a09-a1a9-d02a21536321=testing-taint-value-798a32d4-ea06-4e71-9d4e-b154131cd810:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-6f9f5936-5013-4a66-ba6d-734a34b245a2=testing-taint-value-8870459b-74e7-4a18-ac0c-dedca4c03a6d:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-7faf7971-73fd-4874-a970-d78ac76a3d55=testing-taint-value-cc9514bb-3b75-497b-b24d-c2d22b4f23f7:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-taint-key-e82f0c4d-c918-42fd-a455-8ebdfb7dcea4=testing-taint-value-522eadfb-a6e3-4795-8ca5-3411489cc632:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-e82f0c4d-c918-42fd-a455-8ebdfb7dcea4=testing-taint-value-522eadfb-a6e3-4795-8ca5-3411489cc632:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-7faf7971-73fd-4874-a970-d78ac76a3d55=testing-taint-value-cc9514bb-3b75-497b-b24d-c2d22b4f23f7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-6f9f5936-5013-4a66-ba6d-734a34b245a2=testing-taint-value-8870459b-74e7-4a18-ac0c-dedca4c03a6d:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-78e804f2-8d00-4a09-a1a9-d02a21536321=testing-taint-value-798a32d4-ea06-4e71-9d4e-b154131cd810:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f2cd2a56-7b3d-4f44-b061-a05c202a56ad=testing-taint-value-63394209-c093-4157-9058-9047b5552634:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3d9120d9-5688-4fd7-b68e-81254331d326=testing-taint-value-1c6ba8ac-3591-4ab7-ab50-59c99a1dff2f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-01d3f4f0-3b61-4186-baf7-bcd609eb0359=testing-taint-value-008c24d7-a097-4ba2-b3bd-dfab980f92c6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-44a48c2c-3a37-4351-83e5-647733e8144d=testing-taint-value-4bac77dd-fdae-4d61-a6ab-15fef0544f7f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-cb14483e-d44d-47cc-b050-b1bd0e193eac=testing-taint-value-8b5c0d23-f73d-4489-b045-48d1a4eadbf8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-8f5900a1-b46f-42ff-99ed-6d85df3066d5=testing-taint-value-2a8618ca-99c0-4e6d-94d4-2952d46ff899:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-36200aa4-91a0-4eb6-ae17-40210da45794=testing-taint-value-0c0930c9-3a63-4f5d-b3ce-61cbba5165ce:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-40ae47f0-8a7b-452a-8056-9561179bd132=testing-taint-value-59765a32-288e-4632-8a79-ecc8006cd1db:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-2328aea6-b999-4e4c-9d64-d55ab445087c=testing-taint-value-6fe0c5de-d0ae-4d95-bd43-2c3d9530dce4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-3a0e8233-61a6-4175-9ac3-1ec8fd037e7c=testing-taint-value-618a34ca-a8c5-4db8-b3c8-1c2260312a6a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-034d151b-9a9f-4936-874c-7d08c0472517=testing-taint-value-eaec1d2a-5596-442a-8fbc-109809bd4577:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b8e35512-2a02-4cd2-a7f9-1ea8352bffdd=testing-taint-value-37f5ccec-fa93-40c8-9fd6-345dc82208ba:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-b4885d8e-4118-421a-8b6c-67da4e73f927=testing-taint-value-7d8e88fa-2558-45b3-982c-b5a99364776b:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-taint-key-28c71984-6d48-4ab4-8860-cf0cc239b784=testing-taint-value-72922a32-9d0e-4aa7-abd1-dc9d9f329374:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-5e692955-9d9c-44ee-ae3b-bd538fb7b4c7=testing-taint-value-ee706ca2-02ca-4235-9242-309fcc2a4fff:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-fe5d03d5-e1a7-4ae9-a841-7923ca08cb9d=testing-taint-value-c37142dd-f8a6-4d64-8c58-e89ee0dd2300:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:42:17.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-4917" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:81.625 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:308 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":12,"completed":6,"skipped":2869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:42:17.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 Aug 28 02:42:17.148: INFO: Waiting up to 1m0s for all nodes to be ready Aug 28 02:43:17.197: INFO: Waiting for terminating namespaces to be deleted... 
Aug 28 02:43:17.199: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 28 02:43:17.217: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting Aug 28 02:43:17.217: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting Aug 28 02:43:17.217: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 28 02:43:17.217: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 Aug 28 02:43:17.233: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:43:17.233: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:43:17.233: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 
200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:43:17.233: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:43:17.234: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:43:17.250: INFO: Waiting for running... Aug 28 02:43:22.313: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 28 02:43:27.380: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Node: node1, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Aug 28 02:43:27.380: INFO: Node: node1, totalRequestedMemResource: 1250829250560, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Aug 28 02:43:27.380: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Pod for on the node: bbdf6c53-7e4d-4cc2-b91a-8465010c8036-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:43:27.380: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Aug 28 02:43:27.381: INFO: Node: node2, totalRequestedMemResource: 1161491793920, memAllocatableVal: 178884628480, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8901 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8901, will wait for the garbage collector to delete the pods Aug 28 02:43:33.672: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 6.11595ms Aug 28 02:43:34.373: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 700.616265ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:43:40.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8901" for this suite. 
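The avoidPod behaviour exercised above relies on the scheduler.alpha.kubernetes.io/preferAvoidPods node annotation, whose value is a JSON-encoded AvoidPods structure naming the controller whose pods the node should score against. Below is a hedged Go sketch of building that annotation for a ReplicationController like the one used here; the UID is a placeholder that would normally be read from the live RC object, and this is not the test's own implementation.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        isController := true
        avoid := corev1.AvoidPods{
            PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
                PodSignature: corev1.PodSignature{
                    PodController: &metav1.OwnerReference{
                        APIVersion: "v1",
                        Kind:       "ReplicationController",
                        Name:       "scheduler-priority-avoid-pod",
                        UID:        "replace-with-real-rc-uid", // placeholder
                        Controller: &isController,
                    },
                },
                Reason: "avoidPod annotation demo",
            }},
        }
        payload, err := json.Marshal(avoid)
        if err != nil {
            panic(err)
        }

        node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node1"}}
        node.Annotations = map[string]string{
            corev1.PreferAvoidPodsAnnotationKey: string(payload),
        }
        fmt.Println(node.Annotations[corev1.PreferAvoidPodsAnnotationKey])
    }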
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:83.676 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:244 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":12,"completed":7,"skipped":3354,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:43:40.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 28 02:43:40.827: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 02:43:40.835: INFO: Waiting for terminating namespaces to be deleted... 
Aug 28 02:43:40.837: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:43:40.846: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:43:40.846: INFO: Container discover ready: false, restart count 0 Aug 28 02:43:40.846: INFO: Container init ready: false, restart count 0 Aug 28 02:43:40.846: INFO: Container install ready: false, restart count 0 Aug 28 02:43:40.846: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:43:40.846: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:43:40.846: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:43:40.846: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:43:40.846: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:43:40.846: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:43:40.846: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:43:40.846: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:43:40.846: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:43:40.846: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:43:40.846: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:43:40.846: INFO: Container collectd ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:43:40.846: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:43:40.846: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:43:40.846: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:43:40.846: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Container grafana ready: true, restart count 0 Aug 28 
02:43:40.846: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:43:40.846: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:43:40.846: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:43:40.852: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:43:40.852: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:43:40.852: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:43:40.853: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:43:40.853: INFO: Container discover ready: false, restart count 0 Aug 28 02:43:40.853: INFO: Container init ready: false, restart count 0 Aug 28 02:43:40.853: INFO: Container install ready: false, restart count 0 Aug 28 02:43:40.853: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:43:40.853: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 02:43:40.853: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:43:40.853: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:43:40.853: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:43:40.853: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:43:40.853: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:43:40.853: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:43:40.853: INFO: collectd-64dp2 from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:43:40.853: INFO: Container collectd ready: true, restart count 0 Aug 28 02:43:40.853: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:43:40.853: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:43:40.853: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:43:40.853: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:43:40.853: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:43:40.853: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC (2 container statuses recorded) Aug 28 02:43:40.853: INFO: Container tas-controller ready: true, restart count 0 Aug 28 02:43:40.853: INFO: Container tas-extender ready: true, restart count 0 [It] 
validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f655627e-53a4-4932-a76d-50f4e69a8bc9 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-f655627e-53a4-4932-a76d-50f4e69a8bc9 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-f655627e-53a4-4932-a76d-50f4e69a8bc9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:43:48.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3795" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.131 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":12,"completed":8,"skipped":3444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:43:48.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:141 Aug 28 02:43:48.966: INFO: Waiting up to 1m0s for all nodes to be ready Aug 28 02:44:49.018: INFO: 
Waiting for terminating namespaces to be deleted... Aug 28 02:44:49.020: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Aug 28 02:44:49.039: INFO: The status of Pod cmk-init-discover-node1-spg26 is Succeeded, skipping waiting Aug 28 02:44:49.039: INFO: The status of Pod cmk-init-discover-node2-l9qjd is Succeeded, skipping waiting Aug 28 02:44:49.039: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Aug 28 02:44:49.039: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:350 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 Aug 28 02:44:57.129: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.129: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:44:57.130: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:44:57.130: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 
02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Pod for on the node: tas-telemetry-aware-scheduling-575ccbc9d4-v99df, Cpu: 200, Mem: 419430400 Aug 28 02:44:57.130: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Aug 28 02:44:57.130: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Aug 28 02:44:57.140: INFO: Waiting for running... Aug 28 02:45:02.202: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 28 02:45:07.270: INFO: ComputeCPUMemFraction for node: node1 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Node: node1, totalRequestedCPUResource: 537700, 
cpuAllocatableMil: 77000, cpuFraction: 1 Aug 28 02:45:07.270: INFO: Node: node1, totalRequestedMemResource: 1250829250560, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. Aug 28 02:45:07.270: INFO: ComputeCPUMemFraction for node: node2 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Pod for on the node: 986472bf-7c6a-4cdf-9035-bfae59c335b9-0, Cpu: 38400, Mem: 89337456640 Aug 28 02:45:07.270: INFO: Node: node2, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Aug 28 02:45:07.270: INFO: Node: node2, totalRequestedMemResource: 1161491793920, memAllocatableVal: 178884628480, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node1" STEP: Verifying if the test-pod lands on node "node2" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:358 STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:45:27.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8084" for this suite. 
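Editor's note: in the run above, the suite first levels the two nodes. The ComputeCPUMemFraction lines report requested/allocatable ratios (for example 100m / 77000m ≈ 0.0013 CPU before balancing), and filler pods are added until both nodes sit at the same fraction, so that only topology spread — not free capacity — decides the final placement. A 4-replica ReplicaSet is then pinned to node1, and the test-pod is expected to prefer node2. Below is a rough Go sketch of the kind of soft spread constraint being scored here; the topology key is taken from the log, while the label selector and names are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Soft spreading: ScheduleAnyway only influences scoring, so after the
	// 4-replica ReplicaSet lands on node1, a new matching pod is *preferred*
	// on node2 rather than strictly required there.
	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score", // dedicated key applied to both nodes above
		WhenUnsatisfiable: corev1.ScheduleAnyway,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-score"}, // placeholder selector
		},
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "test-pod",
			Labels: map[string]string{"app": "e2e-pts-score"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{constraint},
		},
	}

	out, _ := json.MarshalIndent(pod.Spec.TopologySpreadConstraints, "", "  ")
	fmt.Println(string(out))
}
```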
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:138 • [SLOW TEST:98.405 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:346 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:364 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":12,"completed":9,"skipped":4182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:45:27.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 28 02:45:27.385: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 02:45:27.392: INFO: Waiting for terminating namespaces to be deleted... 
Aug 28 02:45:27.394: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:45:27.402: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:45:27.402: INFO: Container discover ready: false, restart count 0 Aug 28 02:45:27.402: INFO: Container init ready: false, restart count 0 Aug 28 02:45:27.402: INFO: Container install ready: false, restart count 0 Aug 28 02:45:27.402: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:45:27.402: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:45:27.402: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:45:27.402: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:45:27.402: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:45:27.402: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:45:27.402: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:45:27.402: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:45:27.402: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:45:27.402: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:45:27.402: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:45:27.402: INFO: Container collectd ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:45:27.402: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:45:27.402: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:45:27.402: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:45:27.402: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Container grafana ready: true, restart count 0 Aug 28 
02:45:27.402: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:45:27.402: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:45:27.402: INFO: rs-e2e-pts-score-j9sg6 from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container e2e-pts-score ready: true, restart count 0 Aug 28 02:45:27.402: INFO: rs-e2e-pts-score-q64cm from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container e2e-pts-score ready: true, restart count 0 Aug 28 02:45:27.402: INFO: rs-e2e-pts-score-sp87p from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container e2e-pts-score ready: true, restart count 0 Aug 28 02:45:27.402: INFO: rs-e2e-pts-score-x2h75 from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.402: INFO: Container e2e-pts-score ready: true, restart count 0 Aug 28 02:45:27.402: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:45:27.412: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:45:27.412: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:45:27.412: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:45:27.412: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:45:27.412: INFO: Container discover ready: false, restart count 0 Aug 28 02:45:27.412: INFO: Container init ready: false, restart count 0 Aug 28 02:45:27.412: INFO: Container install ready: false, restart count 0 Aug 28 02:45:27.412: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:45:27.412: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 02:45:27.412: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:45:27.412: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:45:27.412: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:45:27.412: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:45:27.412: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:45:27.412: INFO: collectd-64dp2 from monitoring started at 2021-08-27 
21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:45:27.412: INFO: Container collectd ready: true, restart count 0 Aug 28 02:45:27.412: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:45:27.412: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:45:27.412: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:45:27.412: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:45:27.412: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:45:27.412: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC (2 container statuses recorded) Aug 28 02:45:27.412: INFO: Container tas-controller ready: true, restart count 0 Aug 28 02:45:27.412: INFO: Container tas-extender ready: true, restart count 0 Aug 28 02:45:27.412: INFO: test-pod from sched-priority-8084 started at 2021-08-28 02:45:15 +0000 UTC (1 container statuses recorded) Aug 28 02:45:27.412: INFO: Container test-pod ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:45:43.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2469" for this suite. 
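Editor's note: the PodTopologySpread Filtering spec that just finished (predicates.go:734) is the hard-constraint counterpart of the scoring test above: with MaxSkew=1 and whenUnsatisfiable=DoNotSchedule over the dedicated kubernetes.io/e2e-pts-filter key, 4 replicas can only be placed 2-and-2 across the two labelled nodes, which is why the rs-e2e-pts-filter pods appear split evenly in the later node dumps. Below is a minimal Go sketch of such a ReplicaSet; the selector and names are placeholders, not the suite's actual object definitions.

```go
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(4)
	labels := map[string]string{"app": "e2e-pts-filter"} // placeholder selector

	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "rs-e2e-pts-filter"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "pause",
						Image: "k8s.gcr.io/pause:3.2",
					}},
					// Hard spreading: with MaxSkew=1 and DoNotSchedule, the pod
					// counts per topology domain may differ by at most 1, so 4
					// replicas over 2 labelled nodes must land 2 and 2.
					TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
						MaxSkew:           1,
						TopologyKey:       "kubernetes.io/e2e-pts-filter",
						WhenUnsatisfiable: corev1.DoNotSchedule,
						LabelSelector:     &metav1.LabelSelector{MatchLabels: labels},
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(rs.Spec.Template.Spec.TopologySpreadConstraints, "", "  ")
	fmt.Println(string(out))
}
```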
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.169 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":12,"completed":10,"skipped":4987,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:45:43.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 28 02:45:43.555: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 02:45:43.563: INFO: Waiting for terminating namespaces to be deleted... 
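Editor's note: the pod-overhead spec announced above (predicates.go:269) later adds a RuntimeClass and a fake extended resource, example.com/beardsecond, to the nodes ("Add RuntimeClass and fake resource" in the steps below). Below is a hedged Go sketch of that shape of setup, with placeholder names and quantities; the RuntimeClass admission controller copies overhead.podFixed into pod.spec.overhead, and the scheduler then counts it on top of the container requests when checking node capacity.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1beta1 "k8s.io/api/node/v1beta1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Placeholder quantities; the real spec uses the fake extended resource
	// example.com/beardsecond that it patches onto every node beforehand.
	rc := nodev1beta1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-handler"},
		Handler:    "runc",
		Overhead: &nodev1beta1.Overhead{
			PodFixed: corev1.ResourceList{
				corev1.ResourceName("example.com/beardsecond"): resource.MustParse("250"),
			},
		},
	}

	// A pod referencing the RuntimeClass. The admission controller copies
	// rc.Overhead.PodFixed into pod.spec.overhead; the scheduler adds that
	// to the container requests below when it evaluates node fit.
	runtimeClassName := rc.Name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "filler-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClassName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceName("example.com/beardsecond"): resource.MustParse("500"),
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(map[string]interface{}{
		"runtimeClassOverhead": rc.Overhead,
		"podRuntimeClassName":  pod.Spec.RuntimeClassName,
	}, "", "  ")
	fmt.Println(string(out))
}
```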
Aug 28 02:45:43.565: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:45:43.575: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:45:43.575: INFO: Container discover ready: false, restart count 0 Aug 28 02:45:43.575: INFO: Container init ready: false, restart count 0 Aug 28 02:45:43.575: INFO: Container install ready: false, restart count 0 Aug 28 02:45:43.575: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:45:43.575: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:45:43.575: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:45:43.575: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:45:43.575: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:45:43.575: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:45:43.575: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:45:43.575: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:45:43.575: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:45:43.575: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:45:43.575: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:45:43.575: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:45:43.575: INFO: Container collectd ready: true, restart count 0 Aug 28 02:45:43.575: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:45:43.575: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:45:43.575: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:45:43.575: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:45:43.575: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:45:43.575: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:45:43.575: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:45:43.575: INFO: Container grafana ready: true, restart count 0 Aug 28 
02:45:43.575: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:45:43.575: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:45:43.575: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:45:43.575: INFO: rs-e2e-pts-filter-dw75j from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container e2e-pts-filter ready: true, restart count 0 Aug 28 02:45:43.575: INFO: rs-e2e-pts-filter-q25qt from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container e2e-pts-filter ready: true, restart count 0 Aug 28 02:45:43.575: INFO: rs-e2e-pts-score-j9sg6 from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container e2e-pts-score ready: false, restart count 0 Aug 28 02:45:43.575: INFO: rs-e2e-pts-score-q64cm from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container e2e-pts-score ready: false, restart count 0 Aug 28 02:45:43.575: INFO: rs-e2e-pts-score-sp87p from sched-priority-8084 started at 2021-08-28 02:45:07 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.575: INFO: Container e2e-pts-score ready: false, restart count 0 Aug 28 02:45:43.575: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:45:43.590: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:45:43.590: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:45:43.590: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:45:43.590: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:45:43.591: INFO: Container discover ready: false, restart count 0 Aug 28 02:45:43.591: INFO: Container init ready: false, restart count 0 Aug 28 02:45:43.591: INFO: Container install ready: false, restart count 0 Aug 28 02:45:43.591: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:45:43.591: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 02:45:43.591: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:45:43.591: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:45:43.591: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:45:43.591: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:45:43.591: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 
2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:45:43.591: INFO: collectd-64dp2 from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:45:43.591: INFO: Container collectd ready: true, restart count 0 Aug 28 02:45:43.591: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:45:43.591: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:45:43.591: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:45:43.591: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:45:43.591: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:45:43.591: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC (2 container statuses recorded) Aug 28 02:45:43.591: INFO: Container tas-controller ready: true, restart count 0 Aug 28 02:45:43.591: INFO: Container tas-extender ready: true, restart count 0 Aug 28 02:45:43.591: INFO: rs-e2e-pts-filter-fpbpf from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container e2e-pts-filter ready: true, restart count 0 Aug 28 02:45:43.591: INFO: rs-e2e-pts-filter-zmwpp from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container e2e-pts-filter ready: true, restart count 0 Aug 28 02:45:43.591: INFO: test-pod from sched-priority-8084 started at 2021-08-28 02:45:15 +0000 UTC (1 container statuses recorded) Aug 28 02:45:43.591: INFO: Container test-pod ready: false, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585d40386a35], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585de05eecab], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2907/filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf to node2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585e3ba6d9a9], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.127/24]]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585e3c6d5a2f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585e59d44cc3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 493.278552ms]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585e5f9da396], Reason = [Created], Message = [Created container filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf]
STEP: Considering event: Type = [Normal], Name = [filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf.169f585e65a755a5], Reason = [Started], Message = [Started container filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585c4f86a965], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2907/without-label to node2]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585cabcc396a], Reason = [AddedInterface], Message = [Add eth0 [10.244.4.126/24]]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585cac800ade], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.2"]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585cc88d4ac7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.2" in 470.621016ms]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585cce173d94], Reason = [Created], Message = [Created container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585cd4597d5d], Reason = [Started], Message = [Started container without-label]
STEP: Considering event: Type = [Normal], Name = [without-label.169f585d3f39473e], Reason = [Killing], Message = [Stopping container without-label]
STEP: Considering event: Type = [Warning], Name = [without-label.169f585d40d9d5ad], Reason = [FailedMount], Message = [MountVolume.SetUp failed for volume "default-token-ncn6j" : object "sched-pred-2907"/"default-token-ncn6j" not registered]
STEP: Considering event: Type = [Warning], Name = [additional-pod64faa063-837a-4b1f-a5c0-af9ac8ca7392.169f585ea707fd39], Reason = [FailedScheduling], Message = [0/5 nodes are available: 5 Insufficient example.com/beardsecond.]
[AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249
STEP: Remove fake resource and RuntimeClass
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 28 02:45:54.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2907" for this suite.
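Editor's note: the two FailedScheduling events above ("0/5 nodes are available: 5 Insufficient example.com/beardsecond.") are the expected outcome: once the filler pod's requests plus its pod overhead consume most of the fake resource on the chosen node, the follow-up pod no longer fits anywhere. Below is a toy illustration of the accounting involved, with made-up quantities (the actual values are not recorded in this log).

```go
package main

import "fmt"

func main() {
	// Placeholder quantities for the fake extended resource
	// example.com/beardsecond; the real run's values are not in the log.
	nodeAllocatable := int64(1000)

	fillerRequests := int64(600) // container requests of the filler pod
	fillerOverhead := int64(300) // pod overhead copied from its RuntimeClass

	// The scheduler charges requests + overhead against the node.
	used := fillerRequests + fillerOverhead
	free := nodeAllocatable - used

	additionalPod := int64(200) // request of the follow-up pod
	fmt.Printf("free=%d, additional pod needs %d -> schedulable: %v\n",
		free, additionalPod, additionalPod <= free)
	// With numbers like these, every node reports "Insufficient
	// example.com/beardsecond", matching the FailedScheduling events above.
}
```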
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:11.174 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":12,"completed":11,"skipped":5096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 28 02:45:54.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 28 02:45:54.735: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 28 02:45:54.743: INFO: Waiting for terminating namespaces to be deleted... 
Aug 28 02:45:54.747: INFO: Logging pods the apiserver thinks is on node node1 before test Aug 28 02:45:54.755: INFO: cmk-init-discover-node1-spg26 from kube-system started at 2021-08-27 20:57:37 +0000 UTC (3 container statuses recorded) Aug 28 02:45:54.755: INFO: Container discover ready: false, restart count 0 Aug 28 02:45:54.755: INFO: Container init ready: false, restart count 0 Aug 28 02:45:54.755: INFO: Container install ready: false, restart count 0 Aug 28 02:45:54.755: INFO: cmk-jw4m6 from kube-system started at 2021-08-27 20:58:19 +0000 UTC (2 container statuses recorded) Aug 28 02:45:54.755: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:45:54.755: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:45:54.755: INFO: kube-flannel-ssxn7 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kube-flannel ready: true, restart count 1 Aug 28 02:45:54.755: INFO: kube-multus-ds-amd64-nn7bl from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:45:54.755: INFO: kube-proxy-pb5bl from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:45:54.755: INFO: kubernetes-dashboard-86c6f9df5b-c56fg from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kubernetes-dashboard ready: true, restart count 1 Aug 28 02:45:54.755: INFO: kubernetes-metrics-scraper-678c97765c-gtp5x from kube-system started at 2021-08-27 20:49:21 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Aug 28 02:45:54.755: INFO: nginx-proxy-node1 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container nginx-proxy ready: true, restart count 2 Aug 28 02:45:54.755: INFO: node-feature-discovery-worker-bd9kg from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:45:54.755: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-9lndx from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:45:54.755: INFO: collectd-ccvwg from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:45:54.755: INFO: Container collectd ready: true, restart count 0 Aug 28 02:45:54.755: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:45:54.755: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:45:54.755: INFO: node-exporter-4cvlq from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:45:54.755: INFO: Container kube-rbac-proxy ready: true, restart count 0 Aug 28 02:45:54.755: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:45:54.755: INFO: prometheus-k8s-0 from monitoring started at 2021-08-27 20:59:29 +0000 UTC (5 container statuses recorded) Aug 28 02:45:54.755: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Aug 28 02:45:54.755: INFO: Container grafana ready: true, restart count 0 Aug 28 
02:45:54.755: INFO: Container prometheus ready: true, restart count 1 Aug 28 02:45:54.755: INFO: Container prometheus-config-reloader ready: true, restart count 0 Aug 28 02:45:54.755: INFO: Container rules-configmap-reloader ready: true, restart count 0 Aug 28 02:45:54.755: INFO: rs-e2e-pts-filter-dw75j from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container e2e-pts-filter ready: false, restart count 0 Aug 28 02:45:54.755: INFO: rs-e2e-pts-filter-q25qt from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.755: INFO: Container e2e-pts-filter ready: false, restart count 0 Aug 28 02:45:54.755: INFO: Logging pods the apiserver thinks is on node node2 before test Aug 28 02:45:54.764: INFO: cmk-fzjgr from kube-system started at 2021-08-27 20:58:20 +0000 UTC (2 container statuses recorded) Aug 28 02:45:54.764: INFO: Container nodereport ready: true, restart count 0 Aug 28 02:45:54.764: INFO: Container reconcile ready: true, restart count 0 Aug 28 02:45:54.764: INFO: cmk-init-discover-node2-l9qjd from kube-system started at 2021-08-27 20:57:57 +0000 UTC (3 container statuses recorded) Aug 28 02:45:54.764: INFO: Container discover ready: false, restart count 0 Aug 28 02:45:54.764: INFO: Container init ready: false, restart count 0 Aug 28 02:45:54.764: INFO: Container install ready: false, restart count 0 Aug 28 02:45:54.764: INFO: cmk-webhook-6c9d5f8578-ndbx2 from kube-system started at 2021-08-27 20:58:20 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container cmk-webhook ready: true, restart count 0 Aug 28 02:45:54.764: INFO: kube-flannel-t9qv4 from kube-system started at 2021-08-27 20:48:48 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container kube-flannel ready: true, restart count 2 Aug 28 02:45:54.764: INFO: kube-multus-ds-amd64-tfffk from kube-system started at 2021-08-27 20:48:56 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container kube-multus ready: true, restart count 1 Aug 28 02:45:54.764: INFO: kube-proxy-r4q4t from kube-system started at 2021-08-27 20:48:12 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container kube-proxy ready: true, restart count 2 Aug 28 02:45:54.764: INFO: nginx-proxy-node2 from kube-system started at 2021-08-27 20:54:17 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container nginx-proxy ready: true, restart count 1 Aug 28 02:45:54.764: INFO: node-feature-discovery-worker-54lfh from kube-system started at 2021-08-27 20:55:06 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container nfd-worker ready: true, restart count 0 Aug 28 02:45:54.764: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-4f962 from kube-system started at 2021-08-27 20:55:51 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container kube-sriovdp ready: true, restart count 0 Aug 28 02:45:54.764: INFO: collectd-64dp2 from monitoring started at 2021-08-27 21:04:15 +0000 UTC (3 container statuses recorded) Aug 28 02:45:54.764: INFO: Container collectd ready: true, restart count 0 Aug 28 02:45:54.764: INFO: Container collectd-exporter ready: true, restart count 0 Aug 28 02:45:54.764: INFO: Container rbac-proxy ready: true, restart count 0 Aug 28 02:45:54.764: INFO: node-exporter-p6h5h from monitoring started at 2021-08-27 20:59:13 +0000 UTC (2 container statuses recorded) Aug 28 02:45:54.764: INFO: Container 
kube-rbac-proxy ready: true, restart count 0 Aug 28 02:45:54.764: INFO: Container node-exporter ready: true, restart count 0 Aug 28 02:45:54.764: INFO: tas-telemetry-aware-scheduling-575ccbc9d4-v99df from monitoring started at 2021-08-27 21:02:08 +0000 UTC (2 container statuses recorded) Aug 28 02:45:54.764: INFO: Container tas-controller ready: true, restart count 0 Aug 28 02:45:54.764: INFO: Container tas-extender ready: true, restart count 0 Aug 28 02:45:54.764: INFO: rs-e2e-pts-filter-fpbpf from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container e2e-pts-filter ready: false, restart count 0 Aug 28 02:45:54.764: INFO: rs-e2e-pts-filter-zmwpp from sched-pred-2469 started at 2021-08-28 02:45:37 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container e2e-pts-filter ready: false, restart count 0 Aug 28 02:45:54.764: INFO: filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf from sched-pred-2907 started at 2021-08-28 02:45:50 +0000 UTC (1 container statuses recorded) Aug 28 02:45:54.764: INFO: Container filler-pod-a8fe94db-ae11-4500-a31c-dec807cf68cf ready: true, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-f99d4fd2-3198-4e52-a9ce-3f48006f09ff=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-89609490-9866-4b61-88d5-a0cbdad33ca3 testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-89609490-9866-4b61-88d5-a0cbdad33ca3 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-89609490-9866-4b61-88d5-a0cbdad33ca3 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-f99d4fd2-3198-4e52-a9ce-3f48006f09ff=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 28 02:46:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3900" for this suite. 
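Editor's note: the taints-tolerations spec above works by tainting the found node with a random kubernetes.io/e2e-taint-key-*=testing-taint-value:NoSchedule taint, labelling it, and then relaunching the pod with a toleration for exactly that taint (plus, in the real spec, a node selector for the label) so it can only land on the tainted node. Below is a minimal Go sketch of the taint and the tolerating pod, using placeholder keys rather than the randomly generated ones from this run.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Placeholder key; the e2e spec generates a random
	// kubernetes.io/e2e-taint-key-<uuid> and applies it to the found node.
	taintKey := "kubernetes.io/e2e-taint-key-example"

	// The taint the spec puts on the node: no new pods may schedule there
	// unless they tolerate it.
	taint := corev1.Taint{
		Key:    taintKey,
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The relaunched pod tolerates exactly that taint.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-tolerations"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
			Tolerations: []corev1.Toleration{{
				Key:      taint.Key,
				Operator: corev1.TolerationOpEqual,
				Value:    taint.Value,
				Effect:   corev1.TaintEffectNoSchedule,
			}},
		},
	}

	out, _ := json.MarshalIndent(pod.Spec.Tolerations, "", "  ")
	fmt.Println(string(out))
}
```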
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.159 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":12,"completed":12,"skipped":5454,"failed":0}
SSSSSSSSSSSSSSSSSS
Aug 28 02:46:02.875: INFO: Running AfterSuite actions on all nodes
Aug 28 02:46:02.875: INFO: Running AfterSuite actions on node 1
Aug 28 02:46:02.875: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":12,"completed":12,"skipped":5472,"failed":0}

Ran 12 of 5484 Specs in 523.947 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 5472 Skipped
PASS

Ginkgo ran 1 suite in 8m45.156247892s
Test Suite Passed