I0610 23:52:58.391385 23 e2e.go:129] Starting e2e run "d0a51e44-08fe-425b-86c0-e7061ced9bc9" on Ginkgo node 1
{"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1654905177 - Will randomize all specs
Will run 13 of 5773 specs
Jun 10 23:52:58.406: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:52:58.411: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 10 23:52:58.439: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 10 23:52:58.505: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 10 23:52:58.505: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 10 23:52:58.505: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 10 23:52:58.505: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 10 23:52:58.505: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 10 23:52:58.523: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Jun 10 23:52:58.523: INFO: e2e test version: v1.21.9
Jun 10 23:52:58.524: INFO: kube-apiserver version: v1.21.1
Jun 10 23:52:58.524: INFO: >>> kubeConfig: /root/.kube/config
Jun 10 23:52:58.531: INFO: Cluster IP family: ipv4
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that there is no conflict between pods with same hostPort but different hostIP and protocol
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:52:58.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
W0610 23:52:58.565961 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jun 10 23:52:58.566: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Jun 10 23:52:58.570: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 10 23:52:58.572: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 10 23:52:58.588: INFO: Waiting for terminating namespaces to be deleted...
Jun 10 23:52:58.592: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 10 23:52:58.616: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container discover ready: false, restart count 0
Jun 10 23:52:58.616: INFO: Container init ready: false, restart count 0
Jun 10 23:52:58.616: INFO: Container install ready: false, restart count 0
Jun 10 23:52:58.616: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:52:58.616: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 23:52:58.616: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:52:58.616: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:52:58.616: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 23:52:58.616: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:52:58.616: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:52:58.616: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:52:58.616: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container collectd ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:52:58.616: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:52:58.616: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container config-reloader ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container grafana ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Container prometheus ready: true, restart count 1
Jun 10 23:52:58.616: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.616: INFO: Container tas-extender ready: true, restart count 0
Jun 10 23:52:58.616: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 10 23:52:58.633: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container discover ready: false, restart count 0
Jun 10 23:52:58.633: INFO: Container init ready: false, restart count 0
Jun 10 23:52:58.633: INFO: Container install ready: false, restart count 0
Jun 10 23:52:58.633: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:52:58.633: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:52:58.633: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:52:58.633: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:52:58.633: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 23:52:58.633: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 23:52:58.633: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 23:52:58.633: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:52:58.633: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:52:58.633: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:52:58.633: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container collectd ready: true, restart count 0
Jun 10 23:52:58.633: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:52:58.633: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:52:58.633: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:52:58.633: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:52:58.633: INFO: Container node-exporter ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-bef32626-ab7f-4e41-9784-9237d6e87bd1 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-bef32626-ab7f-4e41-9784-9237d6e87bd1 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-bef32626-ab7f-4e41-9784-9237d6e87bd1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:53:14.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8745" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:16.229 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":1,"skipped":366,"failed":0}
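The spec above passes because the scheduler's host-port filter keys conflicts on the full (hostIP, protocol, hostPort) tuple, not on hostPort alone, so all three pods coexist on node2. A minimal sketch of the three port shapes in the upstream API types; the pod names match the STEP lines, but the image tag, the NodeName shortcut, and the helper itself are illustrative assumptions (the test actually pins pods to the found node via a random label, not NodeName):

```go
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod returns a pause pod binding hostPort 54321 with the given host
// IP and protocol. The node-ports filter only rejects a pod when all three of
// (hostIP, protocol, hostPort) collide with a pod already on the node.
func hostPortPod(name, hostIP string, proto v1.Protocol) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			NodeName: "node2", // simplification; the test targets the node via a random label
			Containers: []v1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.4.1",
				Ports: []v1.ContainerPort{{
					ContainerPort: 54321,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	_ = hostPortPod("pod1", "127.0.0.1", v1.ProtocolTCP)     // scheduled
	_ = hostPortPod("pod2", "10.10.190.208", v1.ProtocolTCP) // different hostIP: no conflict
	_ = hostPortPod("pod3", "10.10.190.208", v1.ProtocolUDP) // different protocol: no conflict
}
```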
------------------------------
[sig-scheduling] SchedulerPriorities [Serial]
Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:53:14.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-priority
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156
Jun 10 23:53:14.805: INFO: Waiting up to 1m0s for all nodes to be ready
Jun 10 23:54:14.861: INFO: Waiting for terminating namespaces to be deleted...
Jun 10 23:54:14.864: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 10 23:54:14.882: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting
Jun 10 23:54:14.882: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting
Jun 10 23:54:14.882: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 10 23:54:14.882: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jun 10 23:54:14.898: INFO: ComputeCPUMemFraction for node: node1
Jun 10 23:54:14.898: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.898: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400
Jun 10 23:54:14.898: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.898: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000
Jun 10 23:54:14.898: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840
Jun 10 23:54:14.898: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.898: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 10 23:54:14.898: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.898: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.898: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.899: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200
Jun 10 23:54:14.899: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 10 23:54:14.899: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.899: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 10 23:54:14.899: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 10 23:54:14.899: INFO: ComputeCPUMemFraction for node: node2
Jun 10 23:54:14.899: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.899: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400
Jun 10 23:54:14.899: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000
Jun 10 23:54:14.899: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840
Jun 10 23:54:14.899: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.899: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000
Jun 10 23:54:14.899: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.899: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 10 23:54:14.899: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.899: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.899: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.899: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200
Jun 10 23:54:14.899: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 10 23:54:14.899: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
[It] Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
Jun 10 23:54:14.917: INFO: ComputeCPUMemFraction for node: node1
Jun 10 23:54:14.917: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.917: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400
Jun 10 23:54:14.917: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000
Jun 10 23:54:14.917: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840
Jun 10 23:54:14.917: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 10 23:54:14.917: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.917: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 10 23:54:14.917: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052
Jun 10 23:54:14.917: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237
Jun 10 23:54:14.917: INFO: ComputeCPUMemFraction for node: node2
Jun 10 23:54:14.917: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.917: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400
Jun 10 23:54:14.917: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000
Jun 10 23:54:14.917: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840
Jun 10 23:54:14.917: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000
Jun 10 23:54:14.917: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 10 23:54:14.917: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200
Jun 10 23:54:14.917: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600
Jun 10 23:54:14.917: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200
Jun 10 23:54:14.917: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974
Jun 10 23:54:14.917: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665
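The fraction lines above are plain requested/allocatable ratios. A standalone re-computation of node1's figures from this run, including the size the balancing helper must pick for its filler pod to land the node at exactly 50% CPU (which matches the 37613m pod created a few lines below; the memory target uses the helper's own rounding, so only the CPU figure is asserted here):

```go
package main

import "fmt"

func main() {
	// node1, taken from the log above.
	const (
		cpuRequestedMilli = 887.0          // totalRequestedCPUResource
		cpuAllocatableMil = 77000.0        // cpuAllocatableMil
		memRequested      = 1710807040.0   // totalRequestedMemResource, bytes
		memAllocatableVal = 178884608000.0 // memAllocatableVal, bytes
	)

	fmt.Println(cpuRequestedMilli / cpuAllocatableMil) // 0.011519480519480519, the cpuFraction above
	fmt.Println(memRequested / memAllocatableVal)      // 0.009563746479518237, the memFraction above

	// Filler pod CPU that brings the node to exactly 50% requested:
	// 0.5*77000 - 887 = 37613m, the Cpu of the 92fa0a65-...-0 pod created below.
	fmt.Println(0.5*cpuAllocatableMil - cpuRequestedMilli) // 37613
}
```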
Jun 10 23:54:14.931: INFO: Waiting for running...
Jun 10 23:54:14.935: INFO: Waiting for running...
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 10 23:54:20.006: INFO: ComputeCPUMemFraction for node: node1
Jun 10 23:54:20.006: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600
Jun 10 23:54:20.006: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400
Jun 10 23:54:20.006: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000
Jun 10 23:54:20.006: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840
Jun 10 23:54:20.006: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000
Jun 10 23:54:20.006: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600
Jun 10 23:54:20.006: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400
Jun 10 23:54:20.006: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: 92fa0a65-153e-47c3-b5da-dbcf5d524cd0-0, Cpu: 37613, Mem: 87744079872
Jun 10 23:54:20.006: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 10 23:54:20.006: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167
STEP: Compute Cpu, Mem Fraction after create balanced pods.
Jun 10 23:54:20.006: INFO: ComputeCPUMemFraction for node: node2
Jun 10 23:54:20.006: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600
Jun 10 23:54:20.006: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400
Jun 10 23:54:20.006: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000
Jun 10 23:54:20.006: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840
Jun 10 23:54:20.006: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000
Jun 10 23:54:20.006: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000
Jun 10 23:54:20.006: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600
Jun 10 23:54:20.006: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200
Jun 10 23:54:20.006: INFO: Pod for on the node: 06f719b8-1385-422d-980c-0fb6c80ceea3-0, Cpu: 37963, Mem: 88885940224
Jun 10 23:54:20.006: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5
Jun 10 23:54:20.006: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273
STEP: Trying to apply 10 (tolerable) taints on the first node.
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-b653202c-82be-45fb-87e4=testing-taint-value-6fb8c81d-f5c4-4283-b681-561859eb5387:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-74ad9c86-cf69-48c4-b0d7=testing-taint-value-69fed161-b5ad-480e-9ea3-7dad2a6edb1e:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0abc5e56-6757-43aa-b5bf=testing-taint-value-6c1ca2d3-132b-4d8c-821e-a0b4ba7a96e5:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-56e9070b-1dff-46ad-b414=testing-taint-value-a3e0096a-9f61-4444-ad61-5b354fb3f6ab:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-3a55e855-5456-48cf-ac87=testing-taint-value-a9c2ec54-065b-47ef-8a5c-4d13a30700b2:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-31f35d44-ffe4-433d-9d27=testing-taint-value-496e1091-6c02-479b-aeab-eb483f201eb6:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-f6d1e248-beba-47d8-b073=testing-taint-value-b3aa649c-3547-4ae7-97aa-61006117e007:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-1d3d9078-f398-4a16-9cc7=testing-taint-value-4e807c7a-d01a-4fe8-b0f1-60566437af14:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5951eb21-ee81-46c6-8f3e=testing-taint-value-4dde134e-1e4a-48b4-8201-7f84725bca4b:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bef07574-9b29-4722-aec1=testing-taint-value-c75fb27d-bdbc-4572-8667-1a5692234d43:PreferNoSchedule
STEP: Adding 10 intolerable taints to all other nodes
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-94c828ac-473f-4985-9162=testing-taint-value-71fa2def-8774-4f34-9faf-62cf129d6525:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-88ac43e6-6bd7-4049-96bf=testing-taint-value-14c7313d-0d65-4a2d-b863-882f331c53cb:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-34f5f09e-03dd-4257-b395=testing-taint-value-8aa5841e-41c8-4017-8714-7d0d85bd449a:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a1896d8c-ecdc-435e-9d8b=testing-taint-value-9cbddabf-c4fe-42ba-bdbc-953c7b455ce7:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-60eaf24b-5c81-4b40-8bca=testing-taint-value-03ce78ee-1080-4923-a35a-8c28594ba63e:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-acb1cc2c-6767-44f8-9394=testing-taint-value-65538517-04d0-4bc8-b3db-7880c76c90d1:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-51dc072a-cf28-4557-a16a=testing-taint-value-df035780-9453-4611-8e4f-91aad985e876:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-97313df4-4dea-408c-a361=testing-taint-value-a1e6178c-6587-498e-8f95-d48a084b5414:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a90ea0a7-6b6e-4f18-8c9e=testing-taint-value-4f1162a4-df4c-4493-81c7-8a83a50b4d60:PreferNoSchedule
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2702d8e0-38a3-4b63-9d2d=testing-taint-value-ca5787ee-c182-463d-b641-bfa816016b56:PreferNoSchedule
STEP: Create a pod that tolerates all the taints of the first node.
STEP: Pod should prefer scheduled to the node that pod can tolerate.
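All of the taints above use the PreferNoSchedule effect, which is a soft scoring signal rather than a hard filter: the scheduler deducts score for each taint a pod does not tolerate, so the pod tolerating all ten taints on the first node is steered toward it rather than forced onto it. A sketch of one taint/toleration pair in the upstream types (the key/value literals are the first pair from the log):

```go
package main

import v1 "k8s.io/api/core/v1"

// One of the ten "tolerable" taints placed on the first node. With
// PreferNoSchedule, untolerated taints cost score instead of filtering
// the node out entirely.
var taint = v1.Taint{
	Key:    "kubernetes.io/e2e-scheduling-priorities-b653202c-82be-45fb-87e4",
	Value:  "testing-taint-value-6fb8c81d-f5c4-4283-b681-561859eb5387",
	Effect: v1.TaintEffectPreferNoSchedule,
}

// The matching entry in the test pod's tolerations list; with all ten taints
// on node1 tolerated and none tolerated elsewhere, node1 wins the scoring.
var toleration = v1.Toleration{
	Key:      taint.Key,
	Operator: v1.TolerationOpEqual,
	Value:    taint.Value,
	Effect:   v1.TaintEffectPreferNoSchedule,
}

func main() { _, _ = taint, toleration }
```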
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-94c828ac-473f-4985-9162=testing-taint-value-71fa2def-8774-4f34-9faf-62cf129d6525:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-88ac43e6-6bd7-4049-96bf=testing-taint-value-14c7313d-0d65-4a2d-b863-882f331c53cb:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-34f5f09e-03dd-4257-b395=testing-taint-value-8aa5841e-41c8-4017-8714-7d0d85bd449a:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a1896d8c-ecdc-435e-9d8b=testing-taint-value-9cbddabf-c4fe-42ba-bdbc-953c7b455ce7:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-60eaf24b-5c81-4b40-8bca=testing-taint-value-03ce78ee-1080-4923-a35a-8c28594ba63e:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-acb1cc2c-6767-44f8-9394=testing-taint-value-65538517-04d0-4bc8-b3db-7880c76c90d1:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-51dc072a-cf28-4557-a16a=testing-taint-value-df035780-9453-4611-8e4f-91aad985e876:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-97313df4-4dea-408c-a361=testing-taint-value-a1e6178c-6587-498e-8f95-d48a084b5414:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a90ea0a7-6b6e-4f18-8c9e=testing-taint-value-4f1162a4-df4c-4493-81c7-8a83a50b4d60:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2702d8e0-38a3-4b63-9d2d=testing-taint-value-ca5787ee-c182-463d-b641-bfa816016b56:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-b653202c-82be-45fb-87e4=testing-taint-value-6fb8c81d-f5c4-4283-b681-561859eb5387:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-74ad9c86-cf69-48c4-b0d7=testing-taint-value-69fed161-b5ad-480e-9ea3-7dad2a6edb1e:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0abc5e56-6757-43aa-b5bf=testing-taint-value-6c1ca2d3-132b-4d8c-821e-a0b4ba7a96e5:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-56e9070b-1dff-46ad-b414=testing-taint-value-a3e0096a-9f61-4444-ad61-5b354fb3f6ab:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-3a55e855-5456-48cf-ac87=testing-taint-value-a9c2ec54-065b-47ef-8a5c-4d13a30700b2:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-31f35d44-ffe4-433d-9d27=testing-taint-value-496e1091-6c02-479b-aeab-eb483f201eb6:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-f6d1e248-beba-47d8-b073=testing-taint-value-b3aa649c-3547-4ae7-97aa-61006117e007:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-1d3d9078-f398-4a16-9cc7=testing-taint-value-4e807c7a-d01a-4fe8-b0f1-60566437af14:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5951eb21-ee81-46c6-8f3e=testing-taint-value-4dde134e-1e4a-48b4-8201-7f84725bca4b:PreferNoSchedule
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bef07574-9b29-4722-aec1=testing-taint-value-c75fb27d-bdbc-4572-8667-1a5692234d43:PreferNoSchedule
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:54:27.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-priority-6560" for this suite.
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153
• [SLOW TEST:72.584 seconds]
[sig-scheduling] SchedulerPriorities [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
Pod should be preferably scheduled to nodes pod can tolerate
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":2,"skipped":1607,"failed":0}
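The teardown above strips each taint and re-checks the node. Outside the e2e framework, the same cleanup can be done with a plain client-go read-modify-write; a minimal sketch, not the framework's helper, which assumes no concurrent node updates (a production version would retry on conflict):

```go
package taintutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// removeTaintsByKey drops every taint whose key is in keys from the node's
// spec, mirroring the "verifying the node doesn't have the taint" checks.
func removeTaintsByKey(ctx context.Context, cs kubernetes.Interface, nodeName string, keys map[string]bool) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if !keys[t.Key] {
			kept = append(kept, t)
		}
	}
	node.Spec.Taints = kept
	_, err = cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
	return err
}
```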
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:54:27.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 10 23:54:27.393: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 10 23:54:27.400: INFO: Waiting for terminating namespaces to be deleted...
Jun 10 23:54:27.402: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 10 23:54:27.419: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container discover ready: false, restart count 0
Jun 10 23:54:27.419: INFO: Container init ready: false, restart count 0
Jun 10 23:54:27.419: INFO: Container install ready: false, restart count 0
Jun 10 23:54:27.419: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:54:27.419: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 23:54:27.419: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:54:27.419: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:54:27.419: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 23:54:27.419: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:54:27.419: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:54:27.419: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:54:27.419: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container collectd ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:54:27.419: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:54:27.419: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container config-reloader ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container grafana ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Container prometheus ready: true, restart count 1
Jun 10 23:54:27.419: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container tas-extender ready: true, restart count 0
Jun 10 23:54:27.419: INFO: with-tolerations from sched-priority-6560 started at 2022-06-10 23:54:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.419: INFO: Container with-tolerations ready: true, restart count 0
Jun 10 23:54:27.419: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 10 23:54:27.429: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container discover ready: false, restart count 0
Jun 10 23:54:27.429: INFO: Container init ready: false, restart count 0
Jun 10 23:54:27.429: INFO: Container install ready: false, restart count 0
Jun 10 23:54:27.429: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:54:27.429: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:54:27.429: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:54:27.429: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:54:27.429: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 23:54:27.429: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 23:54:27.429: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 23:54:27.429: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:54:27.429: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:54:27.429: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:54:27.429: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container collectd ready: true, restart count 0
Jun 10 23:54:27.429: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:54:27.429: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:54:27.429: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:27.429: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:54:27.429: INFO: Container node-exporter ready: true, restart count 0
[It] validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-87497fc2-f83d-4645-bf7f-b2581e3e22ee=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-20f7f520-3cb4-40f3-8e06-68f7e55ba0a8 testing-label-value
STEP: Trying to relaunch the pod, still no tolerations.
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767929b397064], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4677/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767932217046a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f7679334dc1f9b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 314.899804ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767933b9ff9e0], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f7679342ce5871], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767938a4eb004], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f767938ce85194], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-87497fc2-f83d-4645-bf7f-b2581e3e22ee: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
STEP: Removing taint off the node
STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16f767938ce85194], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-87497fc2-f83d-4645-bf7f-b2581e3e22ee: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.]
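The FailedScheduling message above decomposes the 0/5 count: three masters are excluded by the node-role.kubernetes.io/master:NoSchedule taint, node1 fails the pod's node selector (the random label was applied only to node2), and node2 carries the random NoSchedule taint the pod does not tolerate. For contrast, a sketch of a spec fragment that would clear that last hard filter, combining the selector the pod already carries with the toleration it deliberately lacks; types are upstream, the key/value literals come from the log, and the container stanza is a placeholder:

```go
package podspec

import v1 "k8s.io/api/core/v1"

// What the "still-no-tolerations" pod would need in order to schedule while
// the random NoSchedule taint is still on node2.
var wouldSchedule = v1.PodSpec{
	// Matches the random label the test put on node2 (the pod has this part).
	NodeSelector: map[string]string{
		"kubernetes.io/e2e-label-key-20f7f520-3cb4-40f3-8e06-68f7e55ba0a8": "testing-label-value",
	},
	// Tolerates the random NoSchedule taint (the part the test omits on purpose).
	Tolerations: []v1.Toleration{{
		Key:      "kubernetes.io/e2e-taint-key-87497fc2-f83d-4645-bf7f-b2581e3e22ee",
		Operator: v1.TolerationOpEqual,
		Value:    "testing-taint-value",
		Effect:   v1.TaintEffectNoSchedule,
	}},
	Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
}
```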
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767929b397064], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4677/without-toleration to node2]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767932217046a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f7679334dc1f9b], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 314.899804ms]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767933b9ff9e0], Reason = [Created], Message = [Created container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f7679342ce5871], Reason = [Started], Message = [Started container without-toleration]
STEP: Considering event: Type = [Normal], Name = [without-toleration.16f767938a4eb004], Reason = [Killing], Message = [Stopping container without-toleration]
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-87497fc2-f83d-4645-bf7f-b2581e3e22ee=testing-taint-value:NoSchedule
STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16f76793e7bd3e6c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4677/still-no-tolerations to node2]
STEP: removing the label kubernetes.io/e2e-label-key-20f7f520-3cb4-40f3-8e06-68f7e55ba0a8 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-20f7f520-3cb4-40f3-8e06-68f7e55ba0a8
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-87497fc2-f83d-4645-bf7f-b2581e3e22ee=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:54:33.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4677" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:6.185 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that taints-tolerations is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":3,"skipped":2067,"failed":0}
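The "Considering event" lines are the framework replaying namespace events to assert on scheduling outcomes. A comparable query with bare client-go; the field-selector literal follows the standard involvedObject convention and is an assumption here, not lifted from the test's code:

```go
package events

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// schedulingEvents returns the events recorded for one pod, e.g. the
// FailedScheduling warnings asserted on above.
func schedulingEvents(ctx context.Context, cs kubernetes.Interface, ns, pod string) ([]v1.Event, error) {
	list, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + pod,
	})
	if err != nil {
		return nil, err
	}
	return list.Items, nil
}
```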
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
validates that NodeAffinity is respected if not matching
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jun 10 23:54:33.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jun 10 23:54:33.583: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jun 10 23:54:33.591: INFO: Waiting for terminating namespaces to be deleted...
Jun 10 23:54:33.593: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 10 23:54:33.601: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container discover ready: false, restart count 0
Jun 10 23:54:33.601: INFO: Container init ready: false, restart count 0
Jun 10 23:54:33.601: INFO: Container install ready: false, restart count 0
Jun 10 23:54:33.601: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:54:33.601: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 23:54:33.601: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:54:33.601: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:54:33.601: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 23:54:33.601: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:54:33.601: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:54:33.601: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:54:33.601: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container collectd ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:54:33.601: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:54:33.601: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container config-reloader ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container grafana ready: true, restart count 0
Jun 10 23:54:33.601: INFO: Container prometheus ready: true, restart count 1
Jun 10 23:54:33.601: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container tas-extender ready: true, restart count 0
Jun 10 23:54:33.601: INFO: with-tolerations from sched-priority-6560 started at 2022-06-10 23:54:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.601: INFO: Container with-tolerations ready: false, restart count 0
Jun 10 23:54:33.601: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 10 23:54:33.611: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container discover ready: false, restart count 0
Jun 10 23:54:33.611: INFO: Container init ready: false, restart count 0
Jun 10 23:54:33.611: INFO: Container install ready: false, restart count 0
Jun 10 23:54:33.611: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:54:33.611: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:54:33.611: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:54:33.611: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:54:33.611: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 23:54:33.611: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 23:54:33.611: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 23:54:33.611: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:54:33.611: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:54:33.611: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:54:33.611: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container collectd ready: true, restart count 0
Jun 10 23:54:33.611: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:54:33.611: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:54:33.611: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:54:33.611: INFO: Container kube-rbac-proxy ready: true, restart count 0
true, restart count 0 Jun 10 23:54:33.611: INFO: still-no-tolerations from sched-pred-4677 started at 2022-06-10 23:54:33 +0000 UTC (1 container statuses recorded) Jun 10 23:54:33.611: INFO: Container still-no-tolerations ready: false, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16f767940ca2c941], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:54:34.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1277" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":4,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:54:34.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 10 23:54:34.682: INFO: Waiting up to 1m0s for all nodes to be ready Jun 10 23:55:34.735: INFO: Waiting for terminating namespaces to be deleted... Jun 10 23:55:34.737: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 10 23:55:34.756: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting Jun 10 23:55:34.756: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting Jun 10 23:55:34.756: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 10 23:55:34.756: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
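[editor's note] The FailedScheduling event in the NodeAffinity spec above ("0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }") is what the scheduler emits when a pod's nodeSelector names a label no schedulable node carries: the two workers fail the selector, the three masters are filtered by the taint. A minimal sketch of such a pod — the label key/value here is an illustrative assumption; the test generates its own nonexistent selector:

```go
package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the nodeSelector references a label absent from
	// every node, so scheduling fails with
	// "node(s) didn't match Pod's node affinity/selector".
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{
				"label": "nonempty", // assumption: any key/value no node carries
			},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod) // print the manifest as JSON
}
```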
Jun 10 23:55:34.771: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:55:34.771: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.771: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:55:34.771: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:55:34.771: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:55:34.771: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:55:34.771: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.771: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:55:34.771: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Jun 10 23:55:34.771: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Jun 10 23:55:34.771: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:55:34.771: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.771: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:55:34.771: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:55:34.771: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:55:34.771: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:55:34.771: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:55:34.771: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.771: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.771: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:55:34.771: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Jun 10 23:55:34.771: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Jun 10 23:55:34.788: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:55:34.788: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.788: INFO: 
Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:55:34.788: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.788: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:55:34.788: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:55:34.788: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.788: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:55:34.788: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.788: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.789: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.789: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:55:34.789: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:55:34.789: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.789: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Jun 10 23:55:34.789: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Jun 10 23:55:34.789: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:55:34.789: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.789: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:55:34.789: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:55:34.789: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:55:34.789: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.789: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:55:34.789: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.789: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:55:34.789: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.789: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:55:34.789: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:55:34.789: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:55:34.789: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Jun 10 23:55:34.789: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 Jun 10 23:55:34.803: INFO: Waiting for running... Jun 10 23:55:34.805: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
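[editor's note] The ComputeCPUMemFraction lines above reduce to plain ratios, requested/allocatable, and the "balanced pods" created next are sized to bring each node to a target fraction (0.5 in this spec). A small check against the node1 numbers from the log, including the size of the balancing pod reported just below:

```go
package main

import "fmt"

func main() {
	// Numbers taken verbatim from the node1 log lines above.
	requested := 887.0     // totalRequestedCPUResource (millicores)
	allocatable := 77000.0 // cpuAllocatableMil
	fmt.Println(requested / allocatable) // 0.01151948051948052, as logged

	// The balancing pod requests whatever brings the node to the target
	// fraction: 0.5*77000 - 887 = 37613 millicores, matching the
	// "Cpu: 37613" balanced pod logged for node1 below.
	target := 0.5
	fmt.Println(target*allocatable - requested) // 37613
}
```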
Jun 10 23:55:39.877: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:55:39.877: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:55:39.877: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:55:39.877: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.877: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:55:39.877: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:55:39.877: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.877: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:55:39.877: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.877: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.877: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:55:39.877: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:55:39.877: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:55:39.877: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.878: INFO: Pod for on the node: 38988584-c8a4-4591-809e-187daf6b350a-0, Cpu: 37613, Mem: 87744079872 Jun 10 23:55:39.878: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 10 23:55:39.878: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 10 23:55:39.878: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:55:39.878: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:55:39.878: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:55:39.878: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:55:39.878: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:55:39.878: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.878: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:55:39.878: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.878: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:55:39.878: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.878: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:55:39.878: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:55:39.878: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:55:39.878: INFO: Pod for on the node: 2a5f483b-cd26-40f0-a87e-d50da6d05c89-0, Cpu: 37963, Mem: 88885940224 Jun 10 23:55:39.878: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 10 23:55:39.878: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. 
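[editor's note] The avoidPod annotation applied to the first node here is the alpha preferAvoidPods node annotation, which asks the scheduler to de-prioritize pods owned by a given controller. A hedged sketch of what gets patched onto node1 — the ReplicationController name matches the log; the JSON layout follows the core/v1 AvoidPods shape, and the uid, reason, and message are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch: annotate a node so pods controlled by the named RC score
	// poorly on it. Placeholders: <rc-uid>, reason, message.
	avoid := `{"preferAvoidPods":[{"podSignature":{"podController":{` +
		`"apiVersion":"v1","kind":"ReplicationController",` +
		`"name":"scheduler-priority-avoid-pod","uid":"<rc-uid>",` +
		`"controller":true}},"reason":"some reason","message":"some message"}]}`
	node := corev1.Node{ObjectMeta: metav1.ObjectMeta{
		Name: "node1",
		Annotations: map[string]string{
			"scheduler.alpha.kubernetes.io/preferAvoidPods": avoid,
		},
	}}
	fmt.Println(node.Annotations)
}
```

With the annotation in place, scaling the RC to one replica (the step below) should schedule its pod on the other node, which is exactly what "Verify the pods should not scheduled to the node: node1" checks.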
STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8487 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-8487, will wait for the garbage collector to delete the pods Jun 10 23:55:46.114: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.873451ms Jun 10 23:55:46.215: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 100.329923ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:55:57.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-8487" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:83.080 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":5,"skipped":2521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:55:57.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service 
account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 23:55:57.778: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 23:55:57.786: INFO: Waiting for terminating namespaces to be deleted... Jun 10 23:55:57.798: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 10 23:55:57.806: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded) Jun 10 23:55:57.806: INFO: Container discover ready: false, restart count 0 Jun 10 23:55:57.806: INFO: Container init ready: false, restart count 0 Jun 10 23:55:57.806: INFO: Container install ready: false, restart count 0 Jun 10 23:55:57.806: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:55:57.806: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:55:57.806: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:55:57.806: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 23:55:57.806: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:55:57.806: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:55:57.806: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 23:55:57.806: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:55:57.806: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:55:57.806: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:55:57.806: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:55:57.806: INFO: Container collectd ready: true, restart count 0 Jun 10 23:55:57.806: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:55:57.806: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:55:57.806: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:55:57.806: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:55:57.806: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:55:57.806: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded) Jun 10 23:55:57.806: INFO: Container config-reloader ready: true, restart count 0 
Jun 10 23:55:57.806: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 23:55:57.806: INFO: Container grafana ready: true, restart count 0 Jun 10 23:55:57.806: INFO: Container prometheus ready: true, restart count 1 Jun 10 23:55:57.806: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.806: INFO: Container tas-extender ready: true, restart count 0 Jun 10 23:55:57.807: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 10 23:55:57.825: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded) Jun 10 23:55:57.825: INFO: Container discover ready: false, restart count 0 Jun 10 23:55:57.825: INFO: Container init ready: false, restart count 0 Jun 10 23:55:57.825: INFO: Container install ready: false, restart count 0 Jun 10 23:55:57.825: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:55:57.825: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:55:57.825: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:55:57.825: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:55:57.825: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:55:57.825: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 23:55:57.825: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 23:55:57.825: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 23:55:57.825: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:55:57.825: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:55:57.825: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:55:57.825: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:55:57.825: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:55:57.825: INFO: Container collectd ready: true, restart count 0 Jun 10 23:55:57.825: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:55:57.825: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:55:57.825: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses 
recorded) Jun 10 23:55:57.826: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:55:57.826: INFO: Container node-exporter ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Jun 10 23:55:57.862: INFO: Pod cmk-qjrhs requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod cmk-webhook-6c9d5f8578-n9w8j requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod cmk-zpstc requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod kube-flannel-8jl6m requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod kube-flannel-x926c requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod kube-multus-ds-amd64-4gckf requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod kube-multus-ds-amd64-nj866 requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod kube-proxy-4clxz requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod kube-proxy-5bkrr requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod kubernetes-dashboard-785dcbb76d-7pmgn requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod kubernetes-metrics-scraper-5558854cb-pf6tn requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod node-feature-discovery-worker-9xsdt requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod node-feature-discovery-worker-s9mwk requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod collectd-kpj5z requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod collectd-srmjh requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod node-exporter-tk8f9 requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod node-exporter-trpg7 requesting local ephemeral resource =0 on Node node2 Jun 10 23:55:57.862: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-lb2mn requesting local ephemeral resource =0 on Node node1 Jun 10 23:55:57.862: INFO: Using pod capacity: 40608090249 Jun 10 23:55:57.862: INFO: Node: node1 has local ephemeral resource allocatable: 406080902496 Jun 10 23:55:57.862: INFO: Node: node2 has local ephemeral resource allocatable: 406080902496 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Jun 10 23:55:58.052: INFO: Waiting for running... 
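[editor's note] The saturation arithmetic above: each worker reports 406080902496 bytes of allocatable local ephemeral storage, and the per-pod capacity of 40608090249 bytes is one tenth of that (integer-truncated), so ten pods per node, 20 in total, fill both workers and the 21st pod must fail to schedule. A sketch of one saturating pod, assuming only that its limit equals the logged per-pod capacity:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Per-pod slice of the node's allocatable ephemeral storage,
	// from the log: 406080902496 / 10 (truncated) = 40608090249 bytes.
	perPod := resource.NewQuantity(40608090249, resource.BinarySI)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "overcommit-0"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "overcommit-0",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceEphemeralStorage: *perPod,
					},
					Limits: corev1.ResourceList{
						corev1.ResourceEphemeralStorage: *perPod,
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Resources.Limits.StorageEphemeral())
}
```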
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f767a7a951229c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f767a8b373affc], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f767a8d2e2914a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 527.354053ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f767a8e1b51bbb], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16f767a950d74dfe], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f767a7a9cbe3e7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-1 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f767a9634c8aff], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f767a9a94fe43f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.174618549s] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f767a9c385398d], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16f767a9d4f48f4a], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f767a7aebff30a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f767a962b9bb9c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f767a98eb07bf5], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 737.585851ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f767a99c8852b1], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16f767a9d25c0abc], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f767a7af561c73], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f767a9d0c277af], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f767aa041b2d5d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 861.447525ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f767aa09cd62b4], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16f767aa10a34994], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f767a7afe293d1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-12 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f767a9d0bae7f7], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: 
Considering event: Type = [Normal], Name = [overcommit-12.16f767a9e1875f7c], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 281.830466ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f767a9e74d3e6d], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16f767a9ee1b26b9], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f767a7b068b121], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f767a9618f78e6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f767a97e0030c3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 477.142281ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f767a9986dbdb0], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16f767a9d268635d], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f767a7b0fc5050], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-14 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f767a95ed66660], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f767a9d566eadf], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.989175589s] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f767aa18621b5e], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16f767aa57aa3798], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f767a7b184855c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-15 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f767a9eea88758], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f767aa2c760fac], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.036873178s] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f767aa534b7563], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16f767aa5ddbe820], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f767a7b20ed243], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f767aa3b140cc3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f767aa5545ed44], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 439.462886ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f767aa5f74cab8], Reason = [Created], Message = [Created container overcommit-16] 
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16f767aa66a1de66], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f767a7b2bb36b7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f767aa31997081], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f767aa44162188], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 310.155739ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f767aa59643486], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16f767aa62305dcf], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f767a7b356d8ef], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f767aa547fcb57], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f767aa6551f42a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 282.20153ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f767aa6c8967dc], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16f767aa7378a3f0], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f767a7b3dfd942], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-19 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f767a9d0c00bd3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f767a9f23390df], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 561.214574ms] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f767a9f8c907e6], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16f767a9ffac7691], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f767a7aa635c0b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f767a938a10d86], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f767a96809dd3d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 795.389949ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f767a9e493a39c], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16f767aa27e95893], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f767a7aaef81f4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-3 to node1] STEP: 
Considering event: Type = [Normal], Name = [overcommit-3.16f767a9334b19fb], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f767a945dc6999], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 311.499041ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f767a963b45858], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16f767a9c27e4547], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f767a7ab7acf3d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-4 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f767a8dad0c88d], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f767a8f6a5fd49], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 466.951043ms] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f767a8fe323077], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16f767a9626d03de], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f767a7ac00a84b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f767a84270c265], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f767a8646b39c2], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 570.055035ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f767a8b472c22a], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16f767a902478011], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f767a7ac927593], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-6 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f767a9e2ae68c3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f767aa185d3651], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 900.640118ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f767aa39a70759], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16f767aa5bbf644c], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f767a7ad2620e0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-7 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f767a938887a8a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f767a956d67043], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 508.417319ms] STEP: Considering event: Type = 
[Normal], Name = [overcommit-7.16f767a960e81c8b], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16f767a9c2741ddb], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f767a7adb584c4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-8 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f767a95770b849], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f767a9c2897f90], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.796779091s] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f767a9e8fce2ad], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16f767aa3cd59007], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f767a7ae3e8c40], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4367/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f767a8dc1c209a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f767a906c2ff5d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 715.574332ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f767a9216e3ff4], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16f767a965be1a1b], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16f767ab3640b6d7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:56:14.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4367" for this suite. 
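[editor's note] additional-pod fails above for two independent reasons: both workers report "Insufficient ephemeral-storage" (they are saturated by the 20 overcommit pods), and the three control-plane nodes carry the node-role.kubernetes.io/master taint that the pod does not tolerate. For reference, this is the kind of toleration the pod would need to even consider the tainted nodes — a minimal sketch; the e2e pod deliberately has none:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerate the master taint named in the FailedScheduling message;
	// without a matching toleration, tainted nodes are filtered out
	// before scoring ever happens.
	tol := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}
```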
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.394 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":6,"skipped":3734,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:56:14.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 10 23:56:14.176: INFO: Waiting up to 1m0s for all nodes to be ready Jun 10 23:57:14.233: INFO: Waiting for terminating namespaces to be deleted... Jun 10 23:57:14.235: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 10 23:57:14.253: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting Jun 10 23:57:14.253: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting Jun 10 23:57:14.253: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 10 23:57:14.253: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
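[editor's note] The "40 / 42 pods in namespace 'kube-system' are running and ready" gate above tolerates the two cmk-init-discover pods because they are in phase Succeeded, which the framework skips rather than counting as not-ready. A rough client-go sketch of the same bookkeeping — simplified relative to the framework's readiness wait, with the kubeconfig path taken from the log:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	ready, skipped := 0, 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodSucceeded {
			skipped++ // e.g. the cmk-init-discover-* pods above
			continue
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready++
				break
			}
		}
	}
	fmt.Printf("%d / %d pods running and ready (%d Succeeded, skipped)\n",
		ready, len(pods.Items), skipped)
}
```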
Jun 10 23:57:14.271: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:57:14.271: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:57:14.271: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:57:14.271: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:57:14.271: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:57:14.271: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:57:14.271: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:57:14.271: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:57:14.271: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Jun 10 23:57:14.271: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Jun 10 23:57:14.271: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:57:14.271: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:57:14.271: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:57:14.271: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:57:14.271: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:57:14.271: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:57:14.271: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:57:14.271: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:57:14.271: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:57:14.271: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:57:14.271: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Jun 10 23:57:14.271: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. 
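[editor's note] This label pod is how the test picks its target: wherever pod-with-label-security-s1 lands (node2, per the per-node listings below) becomes the node the anti-affinity pod must avoid. A sketch of the labeled pod — only the pod name is confirmed by the log; the security=S1 label is an assumption mirroring the name:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumption: label key/value ("security": "S1") inferred from the
	// pod name in the log; the anti-affinity term later selects on it.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "pod-with-label-security-s1",
			Labels: map[string]string{"security": "S1"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
			}},
		},
	}
	fmt.Println(pod.Name, pod.Labels)
}
```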
STEP: Verifying the node has a label kubernetes.io/hostname Jun 10 23:57:18.316: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:57:18.316: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:57:18.316: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:57:18.316: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:57:18.316: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:57:18.316: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:57:18.316: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:57:18.316: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:57:18.316: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Jun 10 23:57:18.316: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Jun 10 23:57:18.316: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:57:18.316: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:57:18.316: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:57:18.316: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:57:18.316: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:57:18.316: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:57:18.316: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:57:18.316: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:57:18.316: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:57:18.316: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 10 23:57:18.316: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Jun 10 23:57:18.316: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 Jun 10 23:57:18.327: INFO: Waiting for running... Jun 10 23:57:18.331: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 10 23:57:23.410: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:57:23.410: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:57:23.410: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:57:23.410: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:57:23.410: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:57:23.410: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:57:23.410: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:57:23.410: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:57:23.410: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: 54fca284-8dc3-4e8f-832d-0791a31afae0-0, Cpu: 45313, Mem: 105632540672 Jun 10 23:57:23.410: INFO: Node: node1, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 Jun 10 23:57:23.410: INFO: Node: node1, totalRequestedMemResource: 107343347712, memAllocatableVal: 178884608000, memFraction: 0.6000703409429167 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 10 23:57:23.410: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:57:23.410: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:57:23.410: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:57:23.410: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:57:23.410: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:57:23.410: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:57:23.410: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:57:23.410: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:57:23.410: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:57:23.410: INFO: Pod for on the node: 739769ce-2eea-41cd-abf9-370fdf15e549-0, Cpu: 45663, Mem: 106774400614 Jun 10 23:57:23.411: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Jun 10 23:57:23.411: INFO: Node: node2, totalRequestedCPUResource: 46200, cpuAllocatableMil: 77000, cpuFraction: 0.6 Jun 10 23:57:23.411: INFO: Node: node2, totalRequestedMemResource: 107343345254, memAllocatableVal: 178884603904, memFraction: 0.6000703409422913 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:57:31.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3491" for this suite. 
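[editor's note] The podAntiAffinity pod verified above repels pods labeled security=S1 within a kubernetes.io/hostname topology (the label the test verified on the node earlier), so with both nodes otherwise balanced at fraction 0.6 it should land on the node without the labeled pod, node1 here. A hedged sketch of such a term, shown as a required rule; the exact spec in priorities.go may differ:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Anti-affinity against pods labeled security=S1, scoped per host by
	// the kubernetes.io/hostname label verified in the step above.
	affinity := corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchExpressions: []metav1.LabelSelectorRequirement{{
						Key:      "security",
						Operator: metav1.LabelSelectorOpIn,
						Values:   []string{"S1"},
					}},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity.PodAntiAffinity)
}
```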
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:77.320 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":7,"skipped":3773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:57:31.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 23:57:31.497: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 23:57:31.505: INFO: Waiting for terminating namespaces to be deleted... 
Jun 10 23:57:31.508: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 10 23:57:31.520: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded) Jun 10 23:57:31.520: INFO: Container discover ready: false, restart count 0 Jun 10 23:57:31.520: INFO: Container init ready: false, restart count 0 Jun 10 23:57:31.520: INFO: Container install ready: false, restart count 0 Jun 10 23:57:31.520: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:57:31.520: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:57:31.520: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 23:57:31.520: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:57:31.520: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:57:31.520: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 23:57:31.520: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:57:31.520: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:57:31.520: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:57:31.520: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:57:31.520: INFO: Container collectd ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:57:31.520: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:57:31.520: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:57:31.520: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded) Jun 10 23:57:31.520: INFO: Container config-reloader ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container grafana ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Container prometheus ready: true, restart count 1 Jun 10 23:57:31.520: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC 
(1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container tas-extender ready: true, restart count 0 Jun 10 23:57:31.520: INFO: pod-with-pod-antiaffinity from sched-priority-3491 started at 2022-06-10 23:57:23 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.520: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Jun 10 23:57:31.520: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 10 23:57:31.527: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded) Jun 10 23:57:31.527: INFO: Container discover ready: false, restart count 0 Jun 10 23:57:31.527: INFO: Container init ready: false, restart count 0 Jun 10 23:57:31.527: INFO: Container install ready: false, restart count 0 Jun 10 23:57:31.527: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:57:31.527: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:57:31.527: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:57:31.527: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.527: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:57:31.527: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.527: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:57:31.527: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.527: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 23:57:31.527: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.527: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 23:57:31.528: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.528: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 23:57:31.528: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.528: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:57:31.528: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.528: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:57:31.528: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.528: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:57:31.528: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:57:31.528: INFO: Container collectd ready: true, restart count 0 Jun 10 23:57:31.528: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:57:31.528: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:57:31.528: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:57:31.528: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:57:31.528: INFO: Container 
node-exporter ready: true, restart count 0 Jun 10 23:57:31.528: INFO: pod-with-label-security-s1 from sched-priority-3491 started at 2022-06-10 23:57:14 +0000 UTC (1 container statuses recorded) Jun 10 23:57:31.528: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-718d9ba2-67e8-49d9-a582-3b251d4e706e 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-718d9ba2-67e8-49d9-a582-3b251d4e706e off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-718d9ba2-67e8-49d9-a582-3b251d4e706e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:57:39.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3211" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.157 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":8,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:57:39.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Jun 10 23:57:39.662: INFO: Waiting up to 1m0s for all nodes to be ready Jun 10 23:58:39.720: INFO: Waiting for terminating namespaces to be deleted... 
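The NodeAffinity spec that just passed follows the classic pattern: pick a schedulable node, stamp it with a random label (kubernetes.io/e2e-718d9ba2-67e8-49d9-a582-3b251d4e706e with value 42 in this run), then relaunch the pod with a required node-affinity term selecting that label. A sketch of that term, using the key and value from the log (the surrounding pod spec is omitted; assumes k8s.io/api is on the module path):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Required (hard) node affinity: the pod is only schedulable onto
	// nodes carrying the randomly generated e2e label.
	affinity := &corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-718d9ba2-67e8-49d9-a582-3b251d4e706e",
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", affinity)
}

Unlike the preferred anti-affinity above, a required term is a filter: if no node matches, the pod stays Pending, so the spec verifies the pod landed on the labelled node (node2 here) before removing the label again.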
Jun 10 23:58:39.723: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Jun 10 23:58:39.744: INFO: The status of Pod cmk-init-discover-node1-hlbt6 is Succeeded, skipping waiting Jun 10 23:58:39.744: INFO: The status of Pod cmk-init-discover-node2-jxvbr is Succeeded, skipping waiting Jun 10 23:58:39.744: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Jun 10 23:58:39.744: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Jun 10 23:58:39.763: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:58:39.763: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:58:39.763: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:58:39.763: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:58:39.763: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:58:39.763: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:58:39.763: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:58:39.763: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:58:39.763: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Jun 10 23:58:39.763: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Jun 10 23:58:39.763: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:58:39.763: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:58:39.763: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:58:39.763: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:58:39.763: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:58:39.763: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:58:39.763: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:58:39.763: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:58:39.763: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:58:39.763: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:58:39.763: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 
77000, cpuFraction: 0.006974025974025974 Jun 10 23:58:39.763: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Jun 10 23:58:47.853: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:58:47.853: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:58:47.853: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:58:47.853: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:58:47.853: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:58:47.853: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:58:47.853: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:58:47.853: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:58:47.853: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:58:47.853: INFO: Node: node2, totalRequestedCPUResource: 537, cpuAllocatableMil: 77000, cpuFraction: 0.006974025974025974 Jun 10 23:58:47.853: INFO: Node: node2, totalRequestedMemResource: 568944640, memAllocatableVal: 178884603904, memFraction: 0.003180512059636665 Jun 10 23:58:47.853: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:58:47.853: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:58:47.853: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:58:47.853: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:58:47.853: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:58:47.853: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:58:47.853: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the 
node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:58:47.853: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:58:47.853: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:58:47.853: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:58:47.853: INFO: Node: node1, totalRequestedCPUResource: 887, cpuAllocatableMil: 77000, cpuFraction: 0.01151948051948052 Jun 10 23:58:47.853: INFO: Node: node1, totalRequestedMemResource: 1710807040, memAllocatableVal: 178884608000, memFraction: 0.009563746479518237 Jun 10 23:58:47.865: INFO: Waiting for running... Jun 10 23:58:47.869: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Jun 10 23:58:52.936: INFO: ComputeCPUMemFraction for node: node2 Jun 10 23:58:52.936: INFO: Pod for on the node: cmk-init-discover-node2-jxvbr, Cpu: 300, Mem: 629145600 Jun 10 23:58:52.936: INFO: Pod for on the node: cmk-zpstc, Cpu: 200, Mem: 419430400 Jun 10 23:58:52.936: INFO: Pod for on the node: kube-flannel-8jl6m, Cpu: 150, Mem: 64000000 Jun 10 23:58:52.936: INFO: Pod for on the node: kube-multus-ds-amd64-nj866, Cpu: 100, Mem: 94371840 Jun 10 23:58:52.936: INFO: Pod for on the node: kube-proxy-4clxz, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: kubernetes-dashboard-785dcbb76d-7pmgn, Cpu: 50, Mem: 64000000 Jun 10 23:58:52.936: INFO: Pod for on the node: kubernetes-metrics-scraper-5558854cb-pf6tn, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: nginx-proxy-node2, Cpu: 25, Mem: 32000000 Jun 10 23:58:52.936: INFO: Pod for on the node: node-feature-discovery-worker-s9mwk, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: collectd-srmjh, Cpu: 300, Mem: 629145600 Jun 10 23:58:52.936: INFO: Pod for on the node: node-exporter-trpg7, Cpu: 112, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: 204bd597-c220-4c15-8dd0-dd0a5459f887-0, Cpu: 37963, Mem: 88885940224 Jun 10 23:58:52.936: INFO: Node: node2, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 10 23:58:52.936: INFO: Node: node2, totalRequestedMemResource: 89454884864, memAllocatableVal: 178884603904, memFraction: 0.5000703409445273 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Jun 10 23:58:52.936: INFO: ComputeCPUMemFraction for node: node1 Jun 10 23:58:52.936: INFO: Pod for on the node: cmk-init-discover-node1-hlbt6, Cpu: 300, Mem: 629145600 Jun 10 23:58:52.936: INFO: Pod for on the node: cmk-qjrhs, Cpu: 200, Mem: 419430400 Jun 10 23:58:52.936: INFO: Pod for on the node: cmk-webhook-6c9d5f8578-n9w8j, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: kube-flannel-x926c, Cpu: 150, Mem: 64000000 Jun 10 23:58:52.936: INFO: Pod for on the node: kube-multus-ds-amd64-4gckf, Cpu: 100, Mem: 94371840 Jun 10 23:58:52.936: INFO: Pod for on the node: kube-proxy-5bkrr, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: nginx-proxy-node1, Cpu: 25, Mem: 32000000 Jun 10 23:58:52.936: INFO: Pod for on the node: node-feature-discovery-worker-9xsdt, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.936: INFO: Pod for on the node: collectd-kpj5z, Cpu: 300, Mem: 629145600 Jun 10 23:58:52.937: INFO: Pod for on the node: node-exporter-tk8f9, Cpu: 112, Mem: 209715200 Jun 10 23:58:52.937: INFO: Pod for on the node: prometheus-k8s-0, Cpu: 400, Mem: 1205862400 Jun 10 23:58:52.937: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn, Cpu: 100, Mem: 209715200 Jun 10 23:58:52.937: INFO: Pod for on the node: 73dcc5ce-423b-43d1-be38-860aad086fd1-0, Cpu: 37613, Mem: 87744079872 Jun 10 23:58:52.937: INFO: Node: node1, totalRequestedCPUResource: 38500, cpuAllocatableMil: 77000, cpuFraction: 0.5 Jun 10 23:58:52.937: INFO: Node: node1, totalRequestedMemResource: 89454886912, memAllocatableVal: 178884608000, memFraction: 0.5000703409429167 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:59:09.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-3463" for this suite. 
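The scoring spec again balances both nodes to the same fraction (0.5 this time), runs a ReplicaSet of 4 matching pods pinned to node2, and then checks that a new pod carrying a soft topology-spread constraint prefers node1, the placement that evens out the spread. A sketch of such a constraint over the dedicated topology key the test applies; the label selector is illustrative, and the snippet assumes k8s.io/api and k8s.io/apimachinery are on the module path:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ScheduleAnyway makes this a scoring (soft) constraint: nodes that
	// reduce the skew of matching pods score higher, but none are filtered.
	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score",
		WhenUnsatisfiable: corev1.ScheduleAnyway,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"foo": "bar"}, // illustrative
		},
	}
	fmt.Printf("%+v\n", constraint)
}

With 4 matching pods on node2 and none on node1, node1 minimizes the resulting skew, which is exactly what the "Verifying if the test-pod lands on node node1" step asserts.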
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:89.387 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":9,"skipped":4111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:59:09.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 23:59:09.050: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 23:59:09.058: INFO: Waiting for terminating namespaces to be deleted... 
Jun 10 23:59:09.069: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 10 23:59:09.084: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded) Jun 10 23:59:09.084: INFO: Container discover ready: false, restart count 0 Jun 10 23:59:09.084: INFO: Container init ready: false, restart count 0 Jun 10 23:59:09.084: INFO: Container install ready: false, restart count 0 Jun 10 23:59:09.084: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:59:09.084: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:59:09.084: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:59:09.084: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.084: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 23:59:09.084: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.084: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:59:09.084: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.084: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:59:09.084: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.084: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 23:59:09.084: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.084: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:59:09.084: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.084: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:59:09.085: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.085: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:59:09.085: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:59:09.085: INFO: Container collectd ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:59:09.085: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:59:09.085: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:59:09.085: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded) Jun 10 23:59:09.085: INFO: Container config-reloader ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Container grafana ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Container prometheus ready: true, restart count 1 Jun 10 23:59:09.085: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC 
(1 container statuses recorded) Jun 10 23:59:09.085: INFO: Container tas-extender ready: true, restart count 0 Jun 10 23:59:09.085: INFO: test-pod from sched-priority-3463 started at 2022-06-10 23:58:58 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.085: INFO: Container test-pod ready: true, restart count 0 Jun 10 23:59:09.085: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 10 23:59:09.100: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded) Jun 10 23:59:09.100: INFO: Container discover ready: false, restart count 0 Jun 10 23:59:09.100: INFO: Container init ready: false, restart count 0 Jun 10 23:59:09.100: INFO: Container install ready: false, restart count 0 Jun 10 23:59:09.100: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:59:09.100: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:59:09.100: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:59:09.100: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:59:09.100: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:59:09.100: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 23:59:09.100: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 23:59:09.100: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 23:59:09.100: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:59:09.100: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:59:09.100: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:59:09.100: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:59:09.100: INFO: Container collectd ready: true, restart count 0 Jun 10 23:59:09.100: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:59:09.100: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:59:09.100: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:59:09.100: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:59:09.100: INFO: Container node-exporter ready: true, restart count 
0 Jun 10 23:59:09.100: INFO: rs-e2e-pts-score-6nv2p from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 10 23:59:09.100: INFO: rs-e2e-pts-score-bvrnj from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 10 23:59:09.100: INFO: rs-e2e-pts-score-kbgcm from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container e2e-pts-score ready: true, restart count 0 Jun 10 23:59:09.100: INFO: rs-e2e-pts-score-vbwqj from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:09.100: INFO: Container e2e-pts-score ready: true, restart count 0 [BeforeEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes. [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [AfterEach] PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728 STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:59:23.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9319" for this suite. 
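The filtering variant uses whenUnsatisfiable: DoNotSchedule, so MaxSkew=1 is a hard constraint rather than a score. A toy re-enactment of why 4 pods over 2 nodes must end up 2/2 (a simplified model of the check, not the scheduler's PodTopologySpread plugin; it uses the pre-placement minimum, which is conservative but gives the same decisions here):

package main

import "fmt"

func main() {
	// Place 4 pods one at a time; a node is rejected if putting the pod
	// there would leave its domain more than maxSkew=1 above the
	// least-populated domain.
	counts := map[string]int{"node1": 0, "node2": 0}
	for i := 0; i < 4; i++ {
		for _, n := range []string{"node2", "node1"} { // bias toward node2
			if fits(counts, n, 1) {
				counts[n]++
				break
			}
		}
	}
	fmt.Println(counts) // map[node1:2 node2:2] – an even split
}

// fits reports whether placing one more pod on node keeps its count
// within maxSkew of the smallest domain count.
func fits(counts map[string]int, node string, maxSkew int) bool {
	min := -1
	for _, c := range counts {
		if min == -1 || c < min {
			min = c
		}
	}
	return counts[node]+1-min <= maxSkew
}

Even with every placement biased toward node2, the constraint forces alternation, which is the even distribution the spec title promises.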
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:14.199 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":10,"skipped":4184,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:59:23.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 23:59:23.245: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 23:59:23.254: INFO: Waiting for terminating namespaces to be deleted... 
Jun 10 23:59:23.259: INFO: Logging pods the apiserver thinks is on node node1 before test Jun 10 23:59:23.270: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded) Jun 10 23:59:23.270: INFO: Container discover ready: false, restart count 0 Jun 10 23:59:23.270: INFO: Container init ready: false, restart count 0 Jun 10 23:59:23.270: INFO: Container install ready: false, restart count 0 Jun 10 23:59:23.270: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:59:23.270: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:59:23.270: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container cmk-webhook ready: true, restart count 0 Jun 10 23:59:23.270: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:59:23.270: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:59:23.270: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container kube-proxy ready: true, restart count 1 Jun 10 23:59:23.270: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:59:23.270: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:59:23.270: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:59:23.270: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:59:23.270: INFO: Container collectd ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:59:23.270: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:59:23.270: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:59:23.270: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded) Jun 10 23:59:23.270: INFO: Container config-reloader ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container grafana ready: true, restart count 0 Jun 10 23:59:23.270: INFO: Container prometheus ready: true, restart count 1 Jun 10 23:59:23.270: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC 
(1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container tas-extender ready: true, restart count 0 Jun 10 23:59:23.270: INFO: rs-e2e-pts-filter-8782d from sched-pred-9319 started at 2022-06-10 23:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 10 23:59:23.270: INFO: rs-e2e-pts-filter-tdcp2 from sched-pred-9319 started at 2022-06-10 23:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 10 23:59:23.270: INFO: test-pod from sched-priority-3463 started at 2022-06-10 23:58:58 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.270: INFO: Container test-pod ready: false, restart count 0 Jun 10 23:59:23.270: INFO: Logging pods the apiserver thinks is on node node2 before test Jun 10 23:59:23.281: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded) Jun 10 23:59:23.281: INFO: Container discover ready: false, restart count 0 Jun 10 23:59:23.281: INFO: Container init ready: false, restart count 0 Jun 10 23:59:23.281: INFO: Container install ready: false, restart count 0 Jun 10 23:59:23.281: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded) Jun 10 23:59:23.281: INFO: Container nodereport ready: true, restart count 0 Jun 10 23:59:23.281: INFO: Container reconcile ready: true, restart count 0 Jun 10 23:59:23.281: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kube-flannel ready: true, restart count 2 Jun 10 23:59:23.281: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kube-multus ready: true, restart count 1 Jun 10 23:59:23.281: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kube-proxy ready: true, restart count 2 Jun 10 23:59:23.281: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kubernetes-dashboard ready: true, restart count 1 Jun 10 23:59:23.281: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Jun 10 23:59:23.281: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container nginx-proxy ready: true, restart count 2 Jun 10 23:59:23.281: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container nfd-worker ready: true, restart count 0 Jun 10 23:59:23.281: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kube-sriovdp ready: true, restart count 0 Jun 10 23:59:23.281: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded) Jun 10 23:59:23.281: INFO: Container collectd ready: true, restart count 0 Jun 
10 23:59:23.281: INFO: Container collectd-exporter ready: true, restart count 0 Jun 10 23:59:23.281: INFO: Container rbac-proxy ready: true, restart count 0 Jun 10 23:59:23.281: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded) Jun 10 23:59:23.281: INFO: Container kube-rbac-proxy ready: true, restart count 0 Jun 10 23:59:23.281: INFO: Container node-exporter ready: true, restart count 0 Jun 10 23:59:23.281: INFO: rs-e2e-pts-filter-7bnql from sched-pred-9319 started at 2022-06-10 23:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 10 23:59:23.281: INFO: rs-e2e-pts-filter-bhq8z from sched-pred-9319 started at 2022-06-10 23:59:19 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container e2e-pts-filter ready: true, restart count 0 Jun 10 23:59:23.281: INFO: rs-e2e-pts-score-bvrnj from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container e2e-pts-score ready: false, restart count 0 Jun 10 23:59:23.281: INFO: rs-e2e-pts-score-kbgcm from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container e2e-pts-score ready: false, restart count 0 Jun 10 23:59:23.281: INFO: rs-e2e-pts-score-vbwqj from sched-priority-3463 started at 2022-06-10 23:58:52 +0000 UTC (1 container statuses recorded) Jun 10 23:59:23.281: INFO: Container e2e-pts-score ready: false, restart count 0 [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767d86eca9ad1], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Considering event: Type = [Warning], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767d8ddbbfb37], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767dba9a98a65], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9074/filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767dbfff4b0c3], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767dc10ef59b1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 284.850551ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767dc17e136e8], Reason = [Created], Message = [Created container filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5] STEP: Considering event: Type = [Normal], Name = [filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5.16f767dc1ec11634], Reason = [Started], Message = [Started container filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5] STEP: Considering event: Type = [Normal], Name = [without-label.16f767d77e1a9cc8], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9074/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16f767d7d140d97c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16f767d7e33dc171], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 301.78183ms] STEP: Considering event: Type = [Normal], Name = [without-label.16f767d7e9bd3518], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16f767d7f0c55616], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16f767d893a021d8], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-podc84f7d95-5011-47d0-aed1-a44bba636e29.16f767dca1070f08], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 10 23:59:46.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9074" for this suite. 
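The overhead spec works by registering a RuntimeClass whose Overhead is charged against node capacity on top of each pod's own requests, using a fake extended resource (example.com/beardsecond in the events above). Once the filler pod has consumed most of that resource, the next pod fails with "Insufficient example.com/beardsecond" precisely because its overhead is counted too. A sketch of such a RuntimeClass; the name, handler, and quantity are illustrative, and the snippet assumes k8s.io/api and k8s.io/apimachinery are on the module path:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Every pod that sets runtimeClassName to this class is billed the
	// PodFixed overhead in addition to its container requests.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-demo"}, // illustrative
		Handler:    "runc",                                   // illustrative
		Overhead: &nodev1.Overhead{
			PodFixed: corev1.ResourceList{
				// the fake extended resource seen in the scheduling events
				corev1.ResourceName("example.com/beardsecond"): resource.MustParse("1000"),
			},
		},
	}
	fmt.Printf("%s charges %v per pod\n", rc.Name, rc.Overhead.PodFixed)
}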
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:23.180 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":11,"skipped":4282,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jun 10 23:59:46.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Jun 10 23:59:46.435: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 10 23:59:46.444: INFO: Waiting for terminating namespaces to be deleted... 
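This last predicates spec checks the matching direction of taints and tolerations: a taint with effect NoSchedule keeps pods off a node unless they carry a toleration whose key, value, and effect line up. A sketch of a matching pair, checked with the ToleratesTaint helper from k8s.io/api (the key and value are illustrative; the e2e test generates random ones):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A NoSchedule taint as it would be applied to a node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key", // illustrative
		Value:  "testing",                     // illustrative
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The toleration a pod needs to be allowed onto that node: same key,
	// same value (with the Equal operator), same effect.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   taint.Effect,
	}

	fmt.Println(toleration.ToleratesTaint(&taint)) // true
}

If the toleration's key, value, or effect differed, ToleratesTaint would return false and the scheduler would filter the tainted node out.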
Jun 10 23:59:46.453: INFO: Logging pods the apiserver thinks is on node node1 before test
Jun 10 23:59:46.468: INFO: cmk-init-discover-node1-hlbt6 from kube-system started at 2022-06-10 20:11:42 +0000 UTC (3 container statuses recorded)
Jun 10 23:59:46.468: INFO: Container discover ready: false, restart count 0
Jun 10 23:59:46.468: INFO: Container init ready: false, restart count 0
Jun 10 23:59:46.468: INFO: Container install ready: false, restart count 0
Jun 10 23:59:46.468: INFO: cmk-qjrhs from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:59:46.468: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:59:46.469: INFO: cmk-webhook-6c9d5f8578-n9w8j from kube-system started at 2022-06-10 20:12:30 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container cmk-webhook ready: true, restart count 0
Jun 10 23:59:46.469: INFO: kube-flannel-x926c from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:59:46.469: INFO: kube-multus-ds-amd64-4gckf from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:59:46.469: INFO: kube-proxy-5bkrr from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container kube-proxy ready: true, restart count 1
Jun 10 23:59:46.469: INFO: nginx-proxy-node1 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:59:46.469: INFO: node-feature-discovery-worker-9xsdt from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:59:46.469: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-k4f5v from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:59:46.469: INFO: collectd-kpj5z from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container collectd ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:59:46.469: INFO: node-exporter-tk8f9 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:59:46.469: INFO: prometheus-k8s-0 from monitoring started at 2022-06-10 20:13:45 +0000 UTC (4 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container config-reloader ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container grafana ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Container prometheus ready: true, restart count 1
Jun 10 23:59:46.469: INFO: tas-telemetry-aware-scheduling-84ff454dfb-lb2mn from monitoring started at 2022-06-10 20:16:40 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.469: INFO: Container tas-extender ready: true, restart count 0
Jun 10 23:59:46.469: INFO: Logging pods the apiserver thinks is on node node2 before test
Jun 10 23:59:46.476: INFO: cmk-init-discover-node2-jxvbr from kube-system started at 2022-06-10 20:12:04 +0000 UTC (3 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container discover ready: false, restart count 0
Jun 10 23:59:46.476: INFO: Container init ready: false, restart count 0
Jun 10 23:59:46.476: INFO: Container install ready: false, restart count 0
Jun 10 23:59:46.476: INFO: cmk-zpstc from kube-system started at 2022-06-10 20:12:29 +0000 UTC (2 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container nodereport ready: true, restart count 0
Jun 10 23:59:46.476: INFO: Container reconcile ready: true, restart count 0
Jun 10 23:59:46.476: INFO: kube-flannel-8jl6m from kube-system started at 2022-06-10 20:00:20 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kube-flannel ready: true, restart count 2
Jun 10 23:59:46.476: INFO: kube-multus-ds-amd64-nj866 from kube-system started at 2022-06-10 20:00:29 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kube-multus ready: true, restart count 1
Jun 10 23:59:46.476: INFO: kube-proxy-4clxz from kube-system started at 2022-06-10 19:59:24 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kube-proxy ready: true, restart count 2
Jun 10 23:59:46.476: INFO: kubernetes-dashboard-785dcbb76d-7pmgn from kube-system started at 2022-06-10 20:01:00 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kubernetes-dashboard ready: true, restart count 1
Jun 10 23:59:46.476: INFO: kubernetes-metrics-scraper-5558854cb-pf6tn from kube-system started at 2022-06-10 20:01:01 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Jun 10 23:59:46.476: INFO: nginx-proxy-node2 from kube-system started at 2022-06-10 19:59:19 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container nginx-proxy ready: true, restart count 2
Jun 10 23:59:46.476: INFO: node-feature-discovery-worker-s9mwk from kube-system started at 2022-06-10 20:08:09 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container nfd-worker ready: true, restart count 0
Jun 10 23:59:46.476: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-z4m46 from kube-system started at 2022-06-10 20:09:21 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kube-sriovdp ready: true, restart count 0
Jun 10 23:59:46.476: INFO: collectd-srmjh from monitoring started at 2022-06-10 20:17:30 +0000 UTC (3 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container collectd ready: true, restart count 0
Jun 10 23:59:46.476: INFO: Container collectd-exporter ready: true, restart count 0
Jun 10 23:59:46.476: INFO: Container rbac-proxy ready: true, restart count 0
Jun 10 23:59:46.476: INFO: node-exporter-trpg7 from monitoring started at 2022-06-10 20:13:33 +0000 UTC (2 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container kube-rbac-proxy ready: true, restart count 0
Jun 10 23:59:46.476: INFO: Container node-exporter ready: true, restart count 0
Jun 10 23:59:46.476: INFO: filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5 from sched-pred-9074 started at 2022-06-10 23:59:41 +0000 UTC (1 container statuses recorded)
Jun 10 23:59:46.476: INFO: Container filler-pod-d5a6b2c6-8103-4e74-9cb9-799c2389eab5 ready: true, restart count 0
[It] validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
STEP: Trying to launch a pod without a toleration to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random taint on the found node.
STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-11d7af95-aeeb-4230-83a5-79d46275e3a5=testing-taint-value:NoSchedule
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-label-key-a0a8f57a-8b92-4bc8-ae65-7ba3bc09a389 testing-label-value
STEP: Trying to relaunch the pod, now with tolerations.
STEP: removing the label kubernetes.io/e2e-label-key-a0a8f57a-8b92-4bc8-ae65-7ba3bc09a389 off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-a0a8f57a-8b92-4bc8-ae65-7ba3bc09a389
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-11d7af95-aeeb-4230-83a5-79d46275e3a5=testing-taint-value:NoSchedule
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jun 10 23:59:54.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9438" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:8.179 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that taints-tolerations is respected if matching
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":12,"skipped":4496,"failed":0}
SSSS… [skipped-spec progress markers elided]
------------------------------
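The spec above exercises exact taint/toleration matching: the test taints a node with a randomly generated NoSchedule key, then relaunches the pod carrying a toleration that mirrors the taint. A minimal sketch of that pairing with the k8s.io/api/core/v1 types (this run's generated key and value are reused purely for illustration; this is not the suite's actual code):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// The NoSchedule taint the test applied to node2 (key and value
	// are regenerated randomly on every run).
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-11d7af95-aeeb-4230-83a5-79d46275e3a5",
		Value:  "testing-taint-value",
		Effect: v1.TaintEffectNoSchedule,
	}

	// The toleration carried by the relaunched pod. With the Equal
	// operator, key, value, and effect must all match the taint.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectNoSchedule,
	}

	// ToleratesTaint performs the same check the scheduler applies when
	// deciding whether this pod may land on the tainted node.
	fmt.Println("tolerates:", toleration.ToleratesTaint(&taint)) // prints: tolerates: true
}

An empty Key with the Exists operator would tolerate any taint; the narrow Equal form used here ensures only the relaunched pod can schedule onto the tainted node, which is what the spec verifies.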
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Jun 10 23:59:54.629: INFO: Waiting up to 1m0s for all nodes to be ready Jun 11 00:00:54.687: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. [AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jun 11 00:01:32.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-711" for this suite. 
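This spec gives each node a capacity of 10 units of a fake extended resource, fills 9 of them with one high- and three low-priority pods, then submits a medium-priority pod whose topology spread constraint can only be satisfied by evicting low-priority pods. A sketch of the medium pod's shape, assuming a hypothetical priority-class name, pause image, and extended-resource name standing in for the suite's real values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "medium",
			Labels: map[string]string{"name": "topology"},
		},
		Spec: v1.PodSpec{
			// Assumed class name; the e2e suite provisions its own
			// priority classes for the high/medium/low pods.
			PriorityClassName: "medium-priority",
			TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
				MaxSkew:           1,
				TopologyKey:       "kubernetes.io/e2e-pts-preemption",
				WhenUnsatisfiable: v1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"name": "topology"},
				},
			}},
			Containers: []v1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				Resources: v1.ResourceRequirements{
					// Hypothetical extended-resource name and quantity
					// standing in for the "fake resource" the test
					// patches onto each node's capacity.
					Limits: v1.ResourceList{
						"example.com/fake-pts-resource": resource.MustParse("4"),
					},
					Requests: v1.ResourceList{
						"example.com/fake-pts-resource": resource.MustParse("4"),
					},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.TopologySpreadConstraints)
}

Because WhenUnsatisfiable is DoNotSchedule, the scheduler cannot simply leave the medium pod pending on a skewed topology; since it outranks the low pods, preempting them is the only way to satisfy both the spread constraint and the fake-resource capacity, which is why only "high", "low-1", and "medium" remain running.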
SSSS… [skipped-spec progress markers elided]
Jun 11 00:01:33.009: INFO: Running AfterSuite actions on all nodes
Jun 11 00:01:33.009: INFO: Running AfterSuite actions on node 1
Jun 11 00:01:33.009: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml
{"msg":"Test Suite completed","total":13,"completed":13,"skipped":5760,"failed":0}

Ran 13 of 5773 Specs in 514.606 seconds
SUCCESS! -- 13 Passed | 0 Failed | 0 Pending | 5760 Skipped
PASS

Ginkgo ran 1 suite in 8m35.969626529s
Test Suite Passed