I1030 05:09:06.091969 22 e2e.go:129] Starting e2e run "59804d86-879f-425b-8e00-c4ce7ca59bb7" on Ginkgo node 1 {"msg":"Test Suite starting","total":13,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1635570544 - Will randomize all specs Will run 13 of 5770 specs Oct 30 05:09:06.106: INFO: >>> kubeConfig: /root/.kube/config Oct 30 05:09:06.111: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Oct 30 05:09:06.139: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:09:06.199: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:09:06.199: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:09:06.199: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:09:06.199: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 30 05:09:06.199: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Oct 30 05:09:06.216: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed) Oct 30 05:09:06.216: INFO: e2e test version: v1.21.5 Oct 30 05:09:06.218: INFO: kube-apiserver version: v1.21.1 Oct 30 05:09:06.218: INFO: >>> kubeConfig: /root/.kube/config Oct 30 05:09:06.225: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a 
kubernetes client Oct 30 05:09:06.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority W1030 05:09:06.252988 22 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 30 05:09:06.253: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 30 05:09:06.256: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:09:06.258: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:10:06.312: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:10:06.315: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:10:06.333: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:10:06.333: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:10:06.333: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:10:06.333: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 30 05:10:06.349: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:10:06.349: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:10:06.349: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:10:06.349: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:10:06.349: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 STEP: Trying to launch a pod with a label to get a node which can launch it. 
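The cpuFraction and memFraction figures logged by ComputeCPUMemFraction above are plain requested-over-allocatable ratios. A minimal standalone check of that arithmetic, with the node1/node2 numbers copied from the log (the fraction helper here is mine, not the framework's):

```go
package main

import "fmt"

// fraction reproduces the requested/allocatable ratio that the e2e helper
// logs for each node (assumed formula: requested divided by allocatable).
func fraction(requested, allocatable float64) float64 {
	return requested / allocatable
}

func main() {
	// Values copied from the log lines above.
	fmt.Println(fraction(100, 77000))              // cpuFraction  ~0.0012987012987012987
	fmt.Println(fraction(104857600, 178884632576)) // node1 memFraction ~0.0005861744437742619
	fmt.Println(fraction(104857600, 178884628480)) // node2 memFraction ~0.0005861744571961558
}
```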
STEP: Verifying the node has a label kubernetes.io/hostname Oct 30 05:10:10.391: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:10:10.391: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:10:10.391: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:10.391: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:10:10.391: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 05:10:10.402: INFO: Waiting for running... Oct 30 05:10:10.406: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
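"Create balanced pods" refers to the e2e framework padding each node with filler pods so that both nodes sit at roughly the same request fraction before the priority under test is exercised; the recomputed fractions follow on the next lines. A rough sketch of that padding idea under this reading (function name and target fraction are assumptions, not the framework's code):

```go
package main

import "fmt"

// paddingNeeded returns how much of a resource a filler pod must request to
// raise a node from its current requested amount up to targetFraction of its
// allocatable capacity. Sketch only; the real balancing logic lives in the
// e2e scheduling framework.
func paddingNeeded(requested, allocatable int64, targetFraction float64) int64 {
	want := int64(targetFraction * float64(allocatable))
	if want <= requested {
		return 0
	}
	return want - requested
}

func main() {
	// Using node1's logged CPU numbers (100m requested, 77000m allocatable),
	// balancing both nodes at, say, half of allocatable CPU:
	fmt.Println(paddingNeeded(100, 77000, 0.5)) // 38400 millicores for the filler pod
}
```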
Oct 30 05:10:15.478: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.478: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:10:15.478: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 30 05:10:15.479: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Pod for on the node: pod-with-label-security-s1, Cpu: 100, Mem: 209715200 Oct 30 05:10:15.479: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:10:15.479: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 
0.0005861744571961558 STEP: Trying to launch the pod with podAntiAffinity. STEP: Wait the pod becomes running STEP: Verify the pod was scheduled to the expected node. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:10:33.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-6489" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:87.296 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be scheduled to node that don't match the PodAntiAffinity terms /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:181 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms","total":13,"completed":1,"skipped":453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:10:33.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:10:33.548: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:10:33.556: INFO: Waiting for terminating namespaces to be deleted... 
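The spec that just passed scheduled pod-with-pod-antiaffinity away from the node running pod-with-label-security-s1. A sketch of the kind of anti-affinity stanza involved, assuming the k8s.io/api and k8s.io/apimachinery modules are available (the authoritative spec lives in test/e2e/scheduling/priorities.go:181):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Sketch of a pod that must NOT land on a node already running a pod
	// labelled security=s1, keyed on the kubernetes.io/hostname topology
	// that the spec verifies.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-pod-antiaffinity"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
			Affinity: &corev1.Affinity{
				PodAntiAffinity: &corev1.PodAntiAffinity{
					RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
						LabelSelector: &metav1.LabelSelector{
							MatchExpressions: []metav1.LabelSelectorRequirement{{
								Key:      "security",
								Operator: metav1.LabelSelectorOpIn,
								Values:   []string{"s1"},
							}},
						},
						TopologyKey: "kubernetes.io/hostname",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```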
Oct 30 05:10:33.559: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:10:33.566: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:10:33.566: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:10:33.566: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:10:33.566: INFO: Container discover ready: false, restart count 0 Oct 30 05:10:33.566: INFO: Container init ready: false, restart count 0 Oct 30 05:10:33.566: INFO: Container install ready: false, restart count 0 Oct 30 05:10:33.566: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:10:33.566: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:10:33.566: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:10:33.566: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:10:33.566: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:10:33.566: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:10:33.566: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:10:33.566: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:10:33.566: INFO: Container collectd ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:10:33.566: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:10:33.566: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:10:33.566: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:10:33.566: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container grafana ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:10:33.566: INFO: pod-with-pod-antiaffinity from sched-priority-6489 started at 2021-10-30 
05:10:15 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.566: INFO: Container pod-with-pod-antiaffinity ready: true, restart count 0 Oct 30 05:10:33.566: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:10:33.575: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:10:33.575: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:10:33.575: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:10:33.575: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:10:33.575: INFO: Container discover ready: false, restart count 0 Oct 30 05:10:33.575: INFO: Container init ready: false, restart count 0 Oct 30 05:10:33.575: INFO: Container install ready: false, restart count 0 Oct 30 05:10:33.575: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:10:33.575: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:10:33.575: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:10:33.575: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:10:33.575: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:10:33.575: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:10:33.575: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:10:33.575: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:10:33.575: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:10:33.575: INFO: Container collectd ready: true, restart count 0 Oct 30 05:10:33.575: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:10:33.575: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:10:33.575: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:10:33.575: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:10:33.575: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:10:33.575: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container 
tas-extender ready: true, restart count 0 Oct 30 05:10:33.575: INFO: pod-with-label-security-s1 from sched-priority-6489 started at 2021-10-30 05:10:06 +0000 UTC (1 container statuses recorded) Oct 30 05:10:33.575: INFO: Container pod-with-label-security-s1 ready: true, restart count 0 [It] validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-ac5c4fe8-d811-4505-9cc6-0e7591079f29=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-8faf1881-f149-4efe-ba51-fcaaa2a62343 testing-label-value STEP: Trying to relaunch the pod, still no tolerations. STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d35cd1bf85], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9139/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3b1039cd5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3c52f6221], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 338.399056ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3ccb0d3ea], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3d36c64f8], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d44bb2585a], Reason = [Killing], Message = [Stopping container without-toleration] STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b2b6d44e35c6a0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-ac5c4fe8-d811-4505-9cc6-0e7591079f29: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] STEP: Removing taint off the node STEP: Considering event: Type = [Warning], Name = [still-no-tolerations.16b2b6d44e35c6a0], Reason = [FailedScheduling], Message = [0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had taint {kubernetes.io/e2e-taint-key-ac5c4fe8-d811-4505-9cc6-0e7591079f29: testing-taint-value}, that the pod didn't tolerate, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
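The FailedScheduling message above accounts for all 5 nodes: the 3 masters are rejected by their node-role taint, one worker fails the node affinity/selector check, and the chosen node carries the freshly applied test taint that still-no-tolerations deliberately does not tolerate. A sketch of that taint and of the toleration the pod would have needed, assuming k8s.io/api is available (the key below is a stand-in for the UUID-suffixed key in the log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The NoSchedule taint the spec applies to the chosen node.
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-taint-key-example", // hypothetical stand-in for the UUID key
		Value:  "testing-taint-value",
		Effect: corev1.TaintEffectNoSchedule,
	}

	// The toleration that "still-no-tolerations" omits; with it, the
	// FailedScheduling event above would not occur for this node.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoSchedule,
	}

	fmt.Println(toleration.ToleratesTaint(&taint)) // true
}
```

With the Equal operator the toleration matches only this exact key/value/effect triple, which is what the companion "if matching" spec later in this run relies on.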
STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d35cd1bf85], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9139/without-toleration to node1] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3b1039cd5], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3c52f6221], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 338.399056ms] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3ccb0d3ea], Reason = [Created], Message = [Created container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d3d36c64f8], Reason = [Started], Message = [Started container without-toleration] STEP: Considering event: Type = [Normal], Name = [without-toleration.16b2b6d44bb2585a], Reason = [Killing], Message = [Stopping container without-toleration] STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-ac5c4fe8-d811-4505-9cc6-0e7591079f29=testing-taint-value:NoSchedule STEP: Considering event: Type = [Normal], Name = [still-no-tolerations.16b2b6d48d3281d7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9139/still-no-tolerations to node1] STEP: removing the label kubernetes.io/e2e-label-key-8faf1881-f149-4efe-ba51-fcaaa2a62343 off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-8faf1881-f149-4efe-ba51-fcaaa2a62343 STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-ac5c4fe8-d811-4505-9cc6-0e7591079f29=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:10:39.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9139" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:6.174 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:619 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching","total":13,"completed":2,"skipped":569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:10:39.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90 Oct 30 05:10:39.736: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:11:39.803: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:308 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-preemption for this test on the 2 nodes. STEP: Apply 10 fake resource to node node2. STEP: Apply 10 fake resource to node node1. [It] validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 STEP: Create 1 High Pod and 3 Low Pods to occupy 9/10 of fake resources on both nodes. STEP: Create 1 Medium Pod with TopologySpreadConstraints STEP: Verify there are 3 Pods left in this namespace STEP: Pod "high" is as expected to be running. STEP: Pod "low-1" is as expected to be running. STEP: Pod "medium" is as expected to be running. 
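This preemption spec fills 9 of the 10 fake resources on each node with one high-priority and three low-priority pods, then creates a medium-priority pod constrained to spread across the dedicated kubernetes.io/e2e-pts-preemption topology, which forces lower-priority pods to be preempted until only high, low-1 and medium remain. A sketch of such a spread constraint, assuming k8s.io/api and k8s.io/apimachinery (the selector label is illustrative; the real spec is in preemption.go:338):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Spread over the dedicated topology key applied to the two test nodes;
	// DoNotSchedule makes the constraint hard, so satisfying it can require
	// preempting lower-priority pods.
	c := corev1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-preemption",
		WhenUnsatisfiable: corev1.DoNotSchedule,
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"name": "medium"}, // assumed label
		},
	}
	out, _ := json.Marshal(c)
	fmt.Println(string(out))
}
```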
[AfterEach] PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:326 STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption STEP: removing the label kubernetes.io/e2e-pts-preemption off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-preemption [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:12:16.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-1770" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78 • [SLOW TEST:96.384 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Preemption /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:302 validates proper pods are preempted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:338 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted","total":13,"completed":3,"skipped":998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:12:16.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:12:16.115: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:12:16.123: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:12:16.125: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:12:16.136: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:12:16.136: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:12:16.136: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:12:16.136: INFO: Container discover ready: false, restart count 0 Oct 30 05:12:16.136: INFO: Container init ready: false, restart count 0 Oct 30 05:12:16.136: INFO: Container install ready: false, restart count 0 Oct 30 05:12:16.136: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:12:16.136: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:12:16.136: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:12:16.136: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:12:16.136: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:12:16.136: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:12:16.136: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:12:16.136: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:12:16.136: INFO: Container collectd ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:12:16.136: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:12:16.136: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:12:16.136: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:12:16.136: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container grafana ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:12:16.136: INFO: low-1 from sched-preemption-1770 started at 2021-10-30 05:11:51 +0000 UTC 
(1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container low-1 ready: true, restart count 0 Oct 30 05:12:16.136: INFO: medium from sched-preemption-1770 started at 2021-10-30 05:12:13 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.136: INFO: Container medium ready: true, restart count 0 Oct 30 05:12:16.136: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:12:16.143: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:12:16.143: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:12:16.143: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:12:16.143: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:12:16.143: INFO: Container discover ready: false, restart count 0 Oct 30 05:12:16.143: INFO: Container init ready: false, restart count 0 Oct 30 05:12:16.143: INFO: Container install ready: false, restart count 0 Oct 30 05:12:16.143: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:12:16.143: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:12:16.143: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:12:16.143: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:12:16.143: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:12:16.143: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:12:16.143: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:12:16.143: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:12:16.143: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:12:16.143: INFO: Container collectd ready: true, restart count 0 Oct 30 05:12:16.143: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:12:16.143: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:12:16.143: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:12:16.143: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:12:16.143: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:12:16.143: INFO: 
tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:12:16.143: INFO: high from sched-preemption-1770 started at 2021-10-30 05:11:47 +0000 UTC (1 container statuses recorded) Oct 30 05:12:16.143: INFO: Container high ready: true, restart count 0 [It] validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b2180073-f430-4677-988b-ad1bebb00555 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-b2180073-f430-4677-988b-ad1bebb00555 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-b2180073-f430-4677-988b-ad1bebb00555 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:12:24.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1285" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.123 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that required NodeAffinity setting is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:528 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching","total":13,"completed":4,"skipped":1232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:12:24.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:12:24.238: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:12:24.246: INFO: Waiting for terminating namespaces to be deleted... 
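The NodeAffinity spec above relaunches the pod "now with labels", i.e. with a required node-affinity term keyed on the random label whose value is 42. A sketch of that term, assuming k8s.io/api and k8s.io/apimachinery (the label key is a stand-in for the UUID-suffixed key in the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Required node affinity: only schedule onto a node carrying the
	// freshly applied test label with value "42".
	aff := corev1.Affinity{
		NodeAffinity: &corev1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
				NodeSelectorTerms: []corev1.NodeSelectorTerm{{
					MatchExpressions: []corev1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-example-key", // hypothetical stand-in
						Operator: corev1.NodeSelectorOpIn,
						Values:   []string{"42"},
					}},
				}},
			},
		},
	}
	out, _ := json.Marshal(aff)
	fmt.Println(string(out))
}
```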
Oct 30 05:12:24.248: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:12:24.257: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:12:24.257: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:12:24.257: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:12:24.257: INFO: Container discover ready: false, restart count 0 Oct 30 05:12:24.257: INFO: Container init ready: false, restart count 0 Oct 30 05:12:24.257: INFO: Container install ready: false, restart count 0 Oct 30 05:12:24.257: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:12:24.257: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:12:24.257: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:12:24.257: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:12:24.257: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:12:24.257: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:12:24.257: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:12:24.257: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:12:24.257: INFO: Container collectd ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:12:24.257: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:12:24.257: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:12:24.257: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:12:24.257: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container grafana ready: true, restart count 0 Oct 30 05:12:24.257: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:12:24.257: INFO: low-1 from sched-preemption-1770 started at 2021-10-30 05:11:51 +0000 UTC 
(1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container low-1 ready: false, restart count 0 Oct 30 05:12:24.257: INFO: medium from sched-preemption-1770 started at 2021-10-30 05:12:13 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.257: INFO: Container medium ready: false, restart count 0 Oct 30 05:12:24.257: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:12:24.273: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:12:24.273: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:12:24.273: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:12:24.273: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:12:24.273: INFO: Container discover ready: false, restart count 0 Oct 30 05:12:24.273: INFO: Container init ready: false, restart count 0 Oct 30 05:12:24.273: INFO: Container install ready: false, restart count 0 Oct 30 05:12:24.273: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:12:24.273: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:12:24.273: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:12:24.273: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:12:24.273: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:12:24.273: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:12:24.273: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:12:24.273: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:12:24.273: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:12:24.273: INFO: Container collectd ready: true, restart count 0 Oct 30 05:12:24.273: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:12:24.273: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:12:24.273: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:12:24.273: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:12:24.273: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:12:24.273: INFO: 
tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:12:24.273: INFO: with-labels from sched-pred-1285 started at 2021-10-30 05:12:20 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container with-labels ready: true, restart count 0 Oct 30 05:12:24.273: INFO: high from sched-preemption-1770 started at 2021-10-30 05:11:47 +0000 UTC (1 container statuses recorded) Oct 30 05:12:24.273: INFO: Container high ready: false, restart count 0 [It] validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 STEP: Trying to launch a pod without a toleration to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random taint on the found node. STEP: verifying the node has the taint kubernetes.io/e2e-taint-key-1d01e8bf-291a-4d10-8f2b-4091c38ac6ec=testing-taint-value:NoSchedule STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-label-key-876aa18c-b353-4996-9c08-212bd72ee4dc testing-label-value STEP: Trying to relaunch the pod, now with tolerations. STEP: removing the label kubernetes.io/e2e-label-key-876aa18c-b353-4996-9c08-212bd72ee4dc off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-label-key-876aa18c-b353-4996-9c08-212bd72ee4dc STEP: verifying the node doesn't have the taint kubernetes.io/e2e-taint-key-1d01e8bf-291a-4d10-8f2b-4091c38ac6ec=testing-taint-value:NoSchedule [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:12:32.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-959" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.171 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that taints-tolerations is respected if matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:576 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching","total":13,"completed":5,"skipped":1263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:12:32.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:12:32.414: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:13:32.466: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:13:32.468: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:13:32.487: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:13:32.487: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:13:32.487: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:13:32.487: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
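The spec starting here exercises the scheduler's preference against nodes annotated with scheduler.alpha.kubernetes.io/preferAvoidPods. A sketch of what such an annotation value contains, assuming k8s.io/api and k8s.io/apimachinery (the controller name and UID are illustrative, not taken from this run):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The node annotation value is a JSON-encoded AvoidPods structure listing
	// pod signatures the scheduler should prefer to keep off this node.
	avoid := corev1.AvoidPods{
		PreferAvoidPods: []corev1.PreferAvoidPodsEntry{{
			PodSignature: corev1.PodSignature{
				PodController: &metav1.OwnerReference{
					APIVersion: "v1",
					Kind:       "ReplicationController",
					Name:       "example-rc", // hypothetical
					UID:        "1234",       // hypothetical
					Controller: func(b bool) *bool { return &b }(true),
				},
			},
			Reason: "some reason",
		}},
	}
	val, _ := json.Marshal(avoid)
	fmt.Printf("%s: %s\n", corev1.PreferAvoidPodsAnnotationKey, val)
}
```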
Oct 30 05:13:32.511: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:13:32.511: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:13:32.511: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.511: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 
77000, cpuFraction: 0.0012987012987012987 Oct 30 05:13:32.511: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 Oct 30 05:13:32.526: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:13:32.526: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:13:32.526: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, 
Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:13:32.526: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:13:32.526: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 05:13:32.542: INFO: Waiting for running... Oct 30 05:13:32.543: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 30 05:13:37.612: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:13:37.612: INFO: Node: node1, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
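The ComputeCPUMemFraction lines above reduce to simple ratios: requested resource divided by node allocatable, apparently clamped to 1 once the "balanced" filler pods push requests past capacity (cpuFraction: 1, memFraction: 1 after totalRequestedCPUResource: 499300 against cpuAllocatableMil: 77000). The small sketch below reproduces that arithmetic with the node1 values from the log; it is an illustration of the reported numbers, not the e2e framework's actual implementation.

```go
// Minimal sketch of the fraction arithmetic the log reports, assuming the
// clamp-to-1 behaviour suggested by the post-"balanced pods" values.
package main

import "fmt"

// fraction returns requested/allocatable, capped at 1.0.
func fraction(requested, allocatable int64) float64 {
	f := float64(requested) / float64(allocatable)
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// Values taken from the node1 lines above.
	fmt.Println(fraction(100, 77000))              // 0.0012987012987012987
	fmt.Println(fraction(104857600, 178884632576)) // 0.0005861744437742619
	fmt.Println(fraction(499300, 77000))           // 1 (saturated after balanced pods)
}
```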
Oct 30 05:13:37.612: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Pod for on the node: ac191f8d-3269-4656-a8c5-93f293746043-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:13:37.612: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:13:37.612: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884628480, memFraction: 1 STEP: Create a RC, with 0 replicas STEP: Trying to apply avoidPod annotations on the first node. STEP: Scale the RC: scheduler-priority-avoid-pod to len(nodeList.Item)-1 : 1. STEP: Scaling ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1581 to 1 STEP: Verify the pods should not scheduled to the node: node1 STEP: deleting ReplicationController scheduler-priority-avoid-pod in namespace sched-priority-1581, will wait for the garbage collector to delete the pods Oct 30 05:13:48.804: INFO: Deleting ReplicationController scheduler-priority-avoid-pod took: 4.278455ms Oct 30 05:13:48.905: INFO: Terminating ReplicationController scheduler-priority-avoid-pod pods took: 101.090256ms [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:14:13.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-1581" for this suite. 
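The avoidPod priority test above annotates node1 so that pods owned by the scheduler-priority-avoid-pod ReplicationController prefer to avoid it, then scales the RC to one replica and verifies the pod is not scheduled to node1. A rough sketch of applying such an annotation with client-go follows; the annotation key is scheduler.alpha.kubernetes.io/preferAvoidPods, but the JSON payload shape and the controller-reference fields below are assumptions for illustration, not copied from the test source.

```go
// Sketch only: annotate a node with a preferAvoidPods hint. This is a soft,
// priority-level signal to the scheduler, not a hard filter. The JSON shape
// below is an assumed illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Annotations == nil {
		node.Annotations = map[string]string{}
	}
	// Pods controlled by this RC should prefer to avoid this node.
	node.Annotations["scheduler.alpha.kubernetes.io/preferAvoidPods"] = `{
	  "preferAvoidPods": [{
	    "podSignature": {
	      "podController": {
	        "apiVersion": "v1",
	        "kind": "ReplicationController",
	        "name": "scheduler-priority-avoid-pod",
	        "controller": true
	      }
	    },
	    "reason": "e2e test",
	    "message": "avoid this node for the test RC"
	  }]
	}`
	_, err = cs.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{})
	fmt.Println("annotated node1, err:", err)
}
```

Because the hint is a scoring signal rather than a predicate, the test first balances CPU/memory requests across both nodes (the "balanced pods" step above) so that the avoidPod annotation is the only thing differentiating them.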
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:101.142 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should avoid nodes that have avoidPod annotation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:265 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation","total":13,"completed":6,"skipped":1519,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:14:13.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:14:13.568: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:14:13.575: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:14:13.578: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:14:13.587: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:14:13.587: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:13.587: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:14:13.587: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:13.587: INFO: Container init ready: false, restart count 0 Oct 30 05:14:13.587: INFO: Container install ready: false, restart count 0 Oct 30 05:14:13.587: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:14:13.587: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:13.587: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:13.587: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:14:13.587: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:13.587: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:13.587: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.587: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:13.587: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:14:13.587: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:13.587: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:14:13.587: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:13.587: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:14:13.587: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container grafana ready: true, restart count 0 Oct 30 05:14:13.587: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:14:13.587: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 
05:14:13.594: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:14:13.594: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:13.594: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:13.594: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:14:13.594: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:13.594: INFO: Container init ready: false, restart count 0 Oct 30 05:14:13.594: INFO: Container install ready: false, restart count 0 Oct 30 05:14:13.594: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:14:13.594: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:14:13.594: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:13.594: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:13.594: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:14:13.594: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:13.594: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:13.594: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:13.594: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:14:13.594: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:13.594: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:13.594: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:13.594: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:14:13.594: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:13.594: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:13.594: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:14:13.594: INFO: Container tas-extender ready: true, restart count 0 [It] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 Oct 30 05:14:13.629: INFO: Pod cmk-89lqq requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod cmk-8bpbf requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod cmk-webhook-6c9d5f8578-ffk66 requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod kube-flannel-f6s5v requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod kube-flannel-phg88 requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod kube-multus-ds-amd64-68wrz requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod kube-multus-ds-amd64-7tvbl requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod kube-proxy-76285 requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod kube-proxy-z5hqt requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod kubernetes-dashboard-785dcbb76d-pbjjt requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod kubernetes-metrics-scraper-5558854cb-5rmjw requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod nginx-proxy-node1 requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod nginx-proxy-node2 requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod node-feature-discovery-worker-h6lcp requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod node-feature-discovery-worker-w5vdb requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod sriov-net-dp-kube-sriov-device-plugin-amd64-t789r requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod collectd-d45rv requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod collectd-flvhl requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod node-exporter-256wm requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod node-exporter-r77s4 requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Pod prometheus-k8s-0 requesting local ephemeral resource =0 on Node node1 Oct 30 05:14:13.629: INFO: Pod tas-telemetry-aware-scheduling-84ff454dfb-989mh requesting local ephemeral resource =0 on Node node2 Oct 30 05:14:13.629: INFO: Using pod capacity: 40542413347 Oct 30 05:14:13.629: INFO: Node: node2 has local ephemeral resource allocatable: 405424133473 Oct 30 05:14:13.629: INFO: Node: node1 has local ephemeral resource allocatable: 405424133473 STEP: Starting additional 20 Pods to fully saturate the cluster local ephemeral resource and trying to start another one Oct 30 05:14:13.816: INFO: Waiting for running... 
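The capacity lines above imply the sizing rule for the overcommit pods: each pod requests roughly one tenth of a node's allocatable local ephemeral storage (405424133473 / 10 ≈ 40542413347, the logged "pod capacity"), so ten pods per worker, twenty in total, saturate the cluster and the extra pod cannot fit. The sketch below shows that arithmetic and the corresponding ephemeral-storage request on a container; the divide-by-ten rule is inferred from the logged numbers rather than quoted from the test source.

```go
// Sketch of the per-pod ephemeral-storage sizing implied by the log, plus the
// shape of such a request using core/v1 resource types.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	nodeAllocatable := int64(405424133473) // "local ephemeral resource allocatable" above
	podsPerNode := int64(10)               // assumed from 405424133473 / 40542413347
	perPod := nodeAllocatable / podsPerNode
	fmt.Println("per-pod ephemeral-storage request:", perPod) // 40542413347

	// What such a request/limit looks like on a pod's container.
	req := v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceEphemeralStorage: *resource.NewQuantity(perPod, resource.BinarySI),
		},
		Limits: v1.ResourceList{
			v1.ResourceEphemeralStorage: *resource.NewQuantity(perPod, resource.BinarySI),
		},
	}
	q := req.Requests[v1.ResourceEphemeralStorage]
	fmt.Println("as a resource.Quantity:", q.String())
}
```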
STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b70698971dce], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-0 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b707186153d1], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b7072c3a403a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 332.974288ms] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b70747d4162a], Reason = [Created], Message = [Created container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-0.16b2b707e848f83b], Reason = [Started], Message = [Started container overcommit-0] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b70698f8d371], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-1 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b70717dd75f2], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b7073d750668], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 630.681958ms] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b70759490ca8], Reason = [Created], Message = [Created container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-1.16b2b707ba3cb946], Reason = [Started], Message = [Started container overcommit-1] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b7069de3dd20], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-10 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b7088bab87ef], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b708a086f064], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 349.905046ms] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b708b04b83ef], Reason = [Created], Message = [Created container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-10.16b2b708ba7b32a9], Reason = [Started], Message = [Started container overcommit-10] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b7069ee3c139], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-11 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b708680de613], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b70881c052c4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 431.117544ms] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b708b3bf2a1f], Reason = [Created], Message = [Created container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-11.16b2b708bf87cdc0], Reason = [Started], Message = [Started container overcommit-11] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b7069effb9c0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-12 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b70790ebfa74], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: 
Considering event: Type = [Normal], Name = [overcommit-12.16b2b707ace5b46f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 469.342085ms] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b707c5ca78d8], Reason = [Created], Message = [Created container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-12.16b2b707f1c8b8d6], Reason = [Started], Message = [Started container overcommit-12] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b7069f82ee90], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-13 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b707fdf4ee95], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b7081c0c9ecb], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 504.859769ms] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b7082dce0dd1], Reason = [Created], Message = [Created container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-13.16b2b7088d58440d], Reason = [Started], Message = [Started container overcommit-13] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b706a012f36d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-14 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b708aa459589], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b708e3520ef4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 957.106939ms] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b708e9e595ff], Reason = [Created], Message = [Created container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-14.16b2b708f2290d9f], Reason = [Started], Message = [Started container overcommit-14] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b706a09ae050], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-15 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b708a642a094], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b708bb29b701], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 350.677354ms] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b708c2ff2b26], Reason = [Created], Message = [Created container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-15.16b2b708ca4c0df9], Reason = [Started], Message = [Started container overcommit-15] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b706a123b5c0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-16 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b708a47e525c], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b708dc4dc9b1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 936.337751ms] STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b708e39f2435], Reason = [Created], Message = [Created container overcommit-16] 
STEP: Considering event: Type = [Normal], Name = [overcommit-16.16b2b708ea34a564], Reason = [Started], Message = [Started container overcommit-16] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b706a1c16fee], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-17 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b708451ec0c6], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b708600289c0], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 451.128581ms] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b70884b09aa9], Reason = [Created], Message = [Created container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-17.16b2b708a699e124], Reason = [Started], Message = [Started container overcommit-17] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b706a242be24], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-18 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b708a1cf367a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b708b42cf666], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 308.125502ms] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b708bb84164e], Reason = [Created], Message = [Created container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-18.16b2b708c231c99c], Reason = [Started], Message = [Started container overcommit-18] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b706a2de596a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-19 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b708a4cd9217], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b70916e2b745], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.913983389s] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b7091d550310], Reason = [Created], Message = [Created container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-19.16b2b709241d4766], Reason = [Started], Message = [Started container overcommit-19] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b7069976d6cb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-2 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b7074b88c411], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b70767610c7d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 467.148635ms] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b7077e751712], Reason = [Created], Message = [Created container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-2.16b2b707b982926b], Reason = [Started], Message = [Started container overcommit-2] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b7069a06903c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-3 to node1] STEP: 
Considering event: Type = [Normal], Name = [overcommit-3.16b2b708a2cf2aa4], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b708c791b97d], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 616.719978ms] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b708ce4ef34a], Reason = [Created], Message = [Created container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-3.16b2b708d51a32b6], Reason = [Started], Message = [Started container overcommit-3] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b7069a906d47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-4 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b708a4bd9c84], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b7090309e01f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.582046313s] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b70909d9d1fc], Reason = [Created], Message = [Created container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-4.16b2b709102d6254], Reason = [Started], Message = [Started container overcommit-4] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b7069b220cc9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-5 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b70822fc54df], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b7085c125fb4], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 957.739238ms] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b7086a776d4c], Reason = [Created], Message = [Created container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-5.16b2b708adc705dd], Reason = [Started], Message = [Started container overcommit-5] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b7069bad49ed], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-6 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b707c857203f], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b707de749ab1], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 371.021577ms] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b708029ffee0], Reason = [Created], Message = [Created container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-6.16b2b7085cf4e463], Reason = [Started], Message = [Started container overcommit-6] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b7069c3a883c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-7 to node1] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b708a49c97ac], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b708ef560fa3], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 1.253663622s] STEP: Considering event: Type = 
[Normal], Name = [overcommit-7.16b2b708f6970c75], Reason = [Created], Message = [Created container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-7.16b2b708fd1cdca0], Reason = [Started], Message = [Started container overcommit-7] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b7069cbfcbea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-8 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b708a64365b8], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b708d0973e8a], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 710.127728ms] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b708d71e1cb2], Reason = [Created], Message = [Created container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-8.16b2b708ded1e83c], Reason = [Started], Message = [Started container overcommit-8] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b7069d47dd6d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5046/overcommit-9 to node2] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b70822fa52ab], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b70844fccc4f], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 570.581056ms] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b708753db335], Reason = [Created], Message = [Created container overcommit-9] STEP: Considering event: Type = [Normal], Name = [overcommit-9.16b2b708af0459b1], Reason = [Started], Message = [Started container overcommit-9] STEP: Considering event: Type = [Warning], Name = [additional-pod.16b2b70a252525b2], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient ephemeral-storage, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:14:29.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5046" for this suite. 
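The run ends with the expected negative check: the twenty-first pod (additional-pod) stays Pending and records a FailedScheduling event, because both workers report Insufficient ephemeral-storage and the three master nodes carry the node-role.kubernetes.io/master taint it does not tolerate. Below is a hedged sketch of how that event could be confirmed with plain client-go; the field selector is a standard Events API query, not the e2e framework's own event-polling helper.

```go
// Sketch only: list FailedScheduling events for the extra pod in the test
// namespace. Namespace and pod name are taken from the log above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	events, err := cs.CoreV1().Events("sched-pred-5046").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=additional-pod,reason=FailedScheduling",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Expected to include "Insufficient ephemeral-storage" for the two workers.
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
```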
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.363 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:120 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]","total":13,"completed":7,"skipped":2197,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:14:29.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:14:29.937: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:14:29.945: INFO: Waiting for terminating namespaces to be deleted... 
Oct 30 05:14:29.947: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:14:29.962: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:14:29.962: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:29.962: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:14:29.962: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:29.962: INFO: Container init ready: false, restart count 0 Oct 30 05:14:29.962: INFO: Container install ready: false, restart count 0 Oct 30 05:14:29.962: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:14:29.962: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:29.962: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:29.962: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:14:29.962: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:29.962: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:29.962: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.962: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:29.962: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:14:29.962: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:29.962: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:14:29.962: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:29.962: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:14:29.962: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container grafana ready: true, restart count 0 Oct 30 05:14:29.962: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:14:29.963: INFO: overcommit-1 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC 
(1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-1 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-12 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-12 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-16 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-16 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-17 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-17 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-18 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-18 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-19 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-19 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-2 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-2 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-3 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-3 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-4 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-4 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: overcommit-7 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.963: INFO: Container overcommit-7 ready: true, restart count 0 Oct 30 05:14:29.963: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:14:29.974: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:14:29.974: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:29.974: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:29.974: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:14:29.974: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:29.974: INFO: Container init ready: false, restart count 0 Oct 30 05:14:29.974: INFO: Container install ready: false, restart count 0 Oct 30 05:14:29.974: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:14:29.974: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:14:29.974: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:29.974: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container 
statuses recorded) Oct 30 05:14:29.974: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:29.974: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:14:29.974: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:29.974: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:29.974: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:29.974: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:14:29.974: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:29.974: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:29.974: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:29.974: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:14:29.974: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:29.974: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:29.974: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-0 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-0 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-10 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-10 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-11 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-11 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-13 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-13 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-14 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-14 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-15 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-15 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-5 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded) Oct 30 05:14:29.974: INFO: Container overcommit-5 ready: true, restart count 0 Oct 30 05:14:29.974: INFO: overcommit-6 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container 
statuses recorded)
Oct 30 05:14:29.974: INFO: Container overcommit-6 ready: true, restart count 0
Oct 30 05:14:29.974: INFO: overcommit-8 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded)
Oct 30 05:14:29.974: INFO: Container overcommit-8 ready: true, restart count 0
Oct 30 05:14:29.974: INFO: overcommit-9 from sched-pred-5046 started at 2021-10-30 05:14:13 +0000 UTC (1 container statuses recorded)
Oct 30 05:14:29.974: INFO: Container overcommit-9 ready: true, restart count 0
[BeforeEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:720
STEP: Trying to get 2 available nodes which can run pod
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-filter for this test on the 2 nodes.
[It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734
Oct 30 05:14:48.060: FAIL: Pods are not distributed as expected on node "node2"
Expected
    : 3
to equal
    : 2

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.14.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:789 +0x768
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0017a1e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0017a1e00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0017a1e00, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] PodTopologySpread Filtering
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:728
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
STEP: removing the label kubernetes.io/e2e-pts-filter off the node node1
STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-filter
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "sched-pred-5774".
STEP: Found 39 events.
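The failing assertion concerns a topologySpreadConstraints filter: with MaxSkew=1 over the dedicated kubernetes.io/e2e-pts-filter key, four pods should split 2-and-2 across the two labeled nodes, yet node2 ended up with three. The events collected below show five ReplicaSet pods created and one of them (rs-e2e-pts-filter-gflvz) rejected by the kubelet's NodeAffinity admission, which is consistent with a replacement pod skewing the final count. A minimal sketch of such a constraint using the core/v1 types; the pod label selector value is illustrative, not the test's generated label.

```go
// Sketch only: a DoNotSchedule topology spread constraint with maxSkew 1 over
// the per-test topology key. Pod counts per topology value may then differ by
// at most one, so 4 pods across 2 labeled nodes should land 2+2.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	spec := v1.PodSpec{
		Containers: []v1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		TopologySpreadConstraints: []v1.TopologySpreadConstraint{{
			MaxSkew:           1,
			TopologyKey:       "kubernetes.io/e2e-pts-filter",
			WhenUnsatisfiable: v1.DoNotSchedule,
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // illustrative
			},
		}},
	}
	out, _ := json.MarshalIndent(spec.TopologySpreadConstraints, "", "  ")
	fmt.Println(string(out))
}
```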
Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:29 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/without-label to node2 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:31 +0000 UTC - event for without-label: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/pause:3.4.1" Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:31 +0000 UTC - event for without-label: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 302.868595ms Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:31 +0000 UTC - event for without-label: {kubelet node2} Created: Created container without-label Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:32 +0000 UTC - event for without-label: {kubelet node2} Started: Started container without-label Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:34 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/without-label to node1 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:34 +0000 UTC - event for without-label: {kubelet node2} Killing: Stopping container without-label Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:35 +0000 UTC - event for without-label: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/pause:3.4.1" Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:35 +0000 UTC - event for without-label: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 341.652658ms Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:36 +0000 UTC - event for without-label: {kubelet node1} Created: Created container without-label Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:37 +0000 UTC - event for without-label: {kubelet node1} Started: Started container without-label Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter: {replicaset-controller } SuccessfulCreate: Created pod: rs-e2e-pts-filter-l2s7l Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter: {replicaset-controller } SuccessfulCreate: Created pod: rs-e2e-pts-filter-fvfqj Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter: {replicaset-controller } SuccessfulCreate: Created pod: rs-e2e-pts-filter-4k8w5 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter: {replicaset-controller } SuccessfulCreate: Created pod: rs-e2e-pts-filter-gflvz Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter: {replicaset-controller } SuccessfulCreate: Created pod: rs-e2e-pts-filter-72v7q Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter-4k8w5: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/rs-e2e-pts-filter-4k8w5 to node2 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter-72v7q: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/rs-e2e-pts-filter-72v7q to node1 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter-fvfqj: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/rs-e2e-pts-filter-fvfqj to node2 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter-gflvz: {kubelet node2} NodeAffinity: Predicate NodeAffinity failed Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter-gflvz: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/rs-e2e-pts-filter-gflvz to node2 Oct 
30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for rs-e2e-pts-filter-l2s7l: {default-scheduler } Scheduled: Successfully assigned sched-pred-5774/rs-e2e-pts-filter-l2s7l to node1 Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:42 +0000 UTC - event for without-label: {kubelet node1} Killing: Stopping container without-label Oct 30 05:14:48.091: INFO: At 2021-10-30 05:14:44 +0000 UTC - event for rs-e2e-pts-filter-4k8w5: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/pause:3.4.1" Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:44 +0000 UTC - event for rs-e2e-pts-filter-4k8w5: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 544.049353ms Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:44 +0000 UTC - event for rs-e2e-pts-filter-fvfqj: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 315.354159ms Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:44 +0000 UTC - event for rs-e2e-pts-filter-fvfqj: {kubelet node2} Started: Started container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:44 +0000 UTC - event for rs-e2e-pts-filter-fvfqj: {kubelet node2} Created: Created container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:44 +0000 UTC - event for rs-e2e-pts-filter-fvfqj: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/pause:3.4.1" Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:45 +0000 UTC - event for rs-e2e-pts-filter-4k8w5: {kubelet node2} Created: Created container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:45 +0000 UTC - event for rs-e2e-pts-filter-4k8w5: {kubelet node2} Started: Started container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:45 +0000 UTC - event for rs-e2e-pts-filter-72v7q: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/pause:3.4.1" Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:45 +0000 UTC - event for rs-e2e-pts-filter-l2s7l: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/pause:3.4.1" Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:46 +0000 UTC - event for rs-e2e-pts-filter-72v7q: {kubelet node1} Started: Started container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:46 +0000 UTC - event for rs-e2e-pts-filter-72v7q: {kubelet node1} Created: Created container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:46 +0000 UTC - event for rs-e2e-pts-filter-72v7q: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 511.45187ms Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:46 +0000 UTC - event for rs-e2e-pts-filter-l2s7l: {kubelet node1} Started: Started container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:46 +0000 UTC - event for rs-e2e-pts-filter-l2s7l: {kubelet node1} Created: Created container e2e-pts-filter Oct 30 05:14:48.092: INFO: At 2021-10-30 05:14:46 +0000 UTC - event for rs-e2e-pts-filter-l2s7l: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 278.168586ms Oct 30 05:14:48.095: INFO: POD NODE PHASE GRACE CONDITIONS Oct 30 05:14:48.095: INFO: rs-e2e-pts-filter-4k8w5 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:42 +0000 UTC }] Oct 30 05:14:48.095: INFO: rs-e2e-pts-filter-72v7q node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:43 
+0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:42 +0000 UTC }] Oct 30 05:14:48.095: INFO: rs-e2e-pts-filter-fvfqj node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:45 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:42 +0000 UTC }] Oct 30 05:14:48.095: INFO: rs-e2e-pts-filter-gflvz node2 Failed [] Oct 30 05:14:48.095: INFO: rs-e2e-pts-filter-l2s7l node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-30 05:14:42 +0000 UTC }] Oct 30 05:14:48.095: INFO: Oct 30 05:14:48.099: INFO: Logging node info for node master1 Oct 30 05:14:48.102: INFO: Node Info: &Node{ObjectMeta:{master1 b47c04d5-47a7-4a95-8e97-481e6e60af54 171566 0 2021-10-29 21:05:34 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:05:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-29 21:05:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-29 21:13:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:27 +0000 UTC,LastTransitionTime:2021-10-29 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:44 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:44 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:44 +0000 UTC,LastTransitionTime:2021-10-29 21:05:32 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 05:14:44 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5d3ed60c561e427db72df14bd9006ed0,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:01b9d6bc-4126-4864-a1df-901a1bee4906,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 05:14:48.103: INFO: Logging kubelet events for node master1 Oct 30 05:14:48.104: INFO: Logging pods the kubelet thinks is on node master1 Oct 30 05:14:48.121: INFO: kube-multus-ds-amd64-wgkfq started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:48.121: INFO: kube-apiserver-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 05:14:48.121: INFO: kube-controller-manager-master1 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Container kube-controller-manager ready: true, restart count 2 Oct 30 05:14:48.121: INFO: kube-flannel-d4pmt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Init container install-cni ready: true, restart count 0 Oct 30 05:14:48.121: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:14:48.121: INFO: container-registry-65d7c44b96-zzkfl started at 2021-10-29 21:12:56 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.121: INFO: Container docker-registry ready: true, restart count 0 Oct 30 05:14:48.121: INFO: Container nginx ready: true, restart count 0 Oct 30 05:14:48.121: INFO: node-exporter-fv84w started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.121: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.121: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.121: INFO: kube-scheduler-master1 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Container kube-scheduler ready: true, restart count 0 Oct 30 05:14:48.121: INFO: kube-proxy-z5k8p started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:48.121: INFO: coredns-8474476ff8-lczbr started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.121: INFO: Container coredns ready: true, restart count 1 W1030 05:14:48.135303 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
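A note on reading these Node dumps: the Capacity/Allocatable entries are printed in resource.Quantity's internal form, so cpu: {{79550 -3} {} 79550m DecimalSI} is the unscaled value 79550 at scale -3, i.e. 79550m (79.55 cores), and memory: {{200324603904 0} {} BinarySI} is a plain byte count. A small sketch with the standard apimachinery resource package, using values copied from the master1 dump purely for illustration:

```go
// Illustrative only: round-trip the quantities shown in the Node dumps,
// e.g. master1's allocatable cpu ("79550m") and memory ("200324603904" bytes).
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("79550m")       // dumped as {{79550 -3} {} 79550m DecimalSI}
	mem := resource.MustParse("200324603904") // dumped as {{200324603904 0} {} BinarySI}

	fmt.Println(cpu.MilliValue()) // 79550 millicores, i.e. 79.55 CPUs
	fmt.Println(mem.Value())      // 200324603904 bytes, roughly 186 GiB
}
```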
Oct 30 05:14:48.212: INFO: Latency metrics for node master1 Oct 30 05:14:48.212: INFO: Logging node info for node master2 Oct 30 05:14:48.215: INFO: Node Info: &Node{ObjectMeta:{master2 208792d3-d365-4ddb-83d4-10e6e818079c 171512 0 2021-10-29 21:06:06 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-29 21:18:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:19 +0000 
UTC,LastTransitionTime:2021-10-29 21:11:19 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:06:06 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 05:14:39 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:12290c1916d84ddda20431c28083da6a,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:314e82b8-9747-4131-b883-220496309995,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 05:14:48.216: INFO: Logging kubelet events for node master2 Oct 30 05:14:48.218: INFO: Logging pods the kubelet thinks is on node master2 Oct 30 05:14:48.242: INFO: kube-proxy-5gz4v started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.242: INFO: Container kube-proxy ready: true, restart count 2 Oct 30 05:14:48.242: INFO: kube-flannel-qvqll started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 05:14:48.242: INFO: Init container install-cni ready: true, restart count 2 Oct 30 05:14:48.242: INFO: Container kube-flannel ready: true, restart count 1 Oct 30 05:14:48.242: INFO: kube-multus-ds-amd64-brkpk started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.242: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:48.242: INFO: node-exporter-lc9kk started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.242: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.242: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.242: INFO: kube-apiserver-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.242: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 05:14:48.242: INFO: kube-controller-manager-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.242: INFO: Container kube-controller-manager ready: true, restart count 3 Oct 30 05:14:48.242: INFO: kube-scheduler-master2 started at 2021-10-29 21:11:09 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.242: INFO: Container kube-scheduler ready: true, restart count 2 W1030 05:14:48.256241 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
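The per-node pod listings ("Logging pods the kubelet thinks is on node ...") amount to listing pods whose spec.nodeName matches the node. A minimal client-go sketch of the same query is below; the kubeconfig path and the node name "node2" are assumptions for illustration, and the e2e framework gathers this through its own helpers rather than this exact code.

```go
// Illustrative only: list the pods bound to a given node, similar to the
// per-node pod dumps in this log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The field selector restricts the list to pods scheduled onto this node.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node2"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		var restarts int32
		for _, cs := range p.Status.ContainerStatuses {
			restarts += cs.RestartCount
		}
		fmt.Printf("%s/%s phase=%s restarts=%d\n", p.Namespace, p.Name, p.Status.Phase, restarts)
	}
}
```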
Oct 30 05:14:48.314: INFO: Latency metrics for node master2 Oct 30 05:14:48.314: INFO: Logging node info for node master3 Oct 30 05:14:48.316: INFO: Node Info: &Node{ObjectMeta:{master3 168f1589-e029-47ae-b194-10215fc22d6a 171505 0 2021-10-29 21:06:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-29 21:06:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-29 21:08:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-29 21:16:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-29 21:16:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:36 +0000 UTC,LastTransitionTime:2021-10-29 21:11:36 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:06:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 05:14:38 +0000 UTC,LastTransitionTime:2021-10-29 21:08:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:de18dcb6cb4c493e9f4d987da2c8b3fd,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:89235c4b-b1f5-4716-bbd7-18b41c0bde74,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 05:14:48.317: INFO: Logging kubelet events for node master3 Oct 30 05:14:48.319: INFO: Logging pods the kubelet thinks is on node master3 Oct 30 05:14:48.338: INFO: kube-apiserver-master3 started at 2021-10-29 21:11:10 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-apiserver ready: true, restart count 0 Oct 30 05:14:48.338: INFO: kube-scheduler-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-scheduler ready: true, restart count 2 Oct 30 05:14:48.338: INFO: dns-autoscaler-7df78bfcfb-phsdx started at 2021-10-29 21:09:02 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container autoscaler ready: true, restart count 1 Oct 30 05:14:48.338: INFO: node-feature-discovery-controller-cff799f9f-qq7g4 started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container nfd-controller ready: true, restart count 0 Oct 30 05:14:48.338: INFO: coredns-8474476ff8-wrwwv started at 2021-10-29 21:09:00 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container coredns ready: true, restart count 1 Oct 30 05:14:48.338: INFO: prometheus-operator-585ccfb458-czbr2 started at 2021-10-29 21:21:06 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.338: INFO: Container prometheus-operator ready: true, restart count 0 Oct 30 05:14:48.338: INFO: node-exporter-bv946 started at 2021-10-29 21:21:15 +0000 UTC (0+2 
container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.338: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.338: INFO: kube-controller-manager-master3 started at 2021-10-29 21:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-controller-manager ready: true, restart count 1 Oct 30 05:14:48.338: INFO: kube-proxy-r6fpx started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:48.338: INFO: kube-flannel-rbdlt started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Init container install-cni ready: true, restart count 2 Oct 30 05:14:48.338: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:14:48.338: INFO: kube-multus-ds-amd64-bdwh9 started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.338: INFO: Container kube-multus ready: true, restart count 1 W1030 05:14:48.350359 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 05:14:48.427: INFO: Latency metrics for node master3 Oct 30 05:14:48.427: INFO: Logging node info for node node1 Oct 30 05:14:48.430: INFO: Node Info: &Node{ObjectMeta:{node1 ddef9269-94c5-4165-81fb-a3b0c4ac5c75 171598 0 2021-10-29 21:07:27 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true 
feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:29 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 05:12:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:38 +0000 UTC,LastTransitionTime:2021-10-29 21:11:38 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:43 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:43 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:43 +0000 UTC,LastTransitionTime:2021-10-29 21:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 05:14:43 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bf4179125e4495c89c046ed0ae7baf7,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:ce868148-dc5e-4c7c-a555-42ee929547f7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432289,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 
k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 
k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 05:14:48.431: INFO: Logging kubelet events for node node1 Oct 30 05:14:48.433: INFO: Logging pods the kubelet thinks is on node node1 Oct 30 05:14:48.453: INFO: rs-e2e-pts-filter-72v7q started at 2021-10-30 05:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.453: INFO: node-feature-discovery-worker-w5vdb started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:48.453: INFO: kube-flannel-phg88 started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Init container install-cni ready: true, restart count 2 Oct 30 05:14:48.453: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:14:48.453: INFO: collectd-d45rv started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 05:14:48.453: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:48.453: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:48.453: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.453: INFO: kube-proxy-z5hqt started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:48.453: INFO: cmk-init-discover-node1-n4mcc started at 2021-10-29 21:19:28 +0000 UTC (0+3 container statuses recorded) Oct 30 05:14:48.453: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:48.453: INFO: Container init ready: false, restart count 0 Oct 30 05:14:48.453: INFO: Container install ready: false, restart count 0 Oct 30 05:14:48.453: INFO: cmk-89lqq started at 2021-10-29 21:20:10 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.453: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:48.453: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:48.453: INFO: nginx-proxy-node1 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:48.453: INFO: rs-e2e-pts-filter-l2s7l started at 2021-10-30 05:14:43 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.453: INFO: kube-multus-ds-amd64-68wrz started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:48.453: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.453: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:48.453: INFO: node-exporter-256wm started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.453: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.453: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.454: INFO: prometheus-k8s-0 started at 2021-10-29 21:21:17 +0000 UTC (0+4 container statuses recorded) Oct 30 05:14:48.454: INFO: 
Container config-reloader ready: true, restart count 0 Oct 30 05:14:48.454: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:14:48.454: INFO: Container grafana ready: true, restart count 0 Oct 30 05:14:48.454: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:14:48.454: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.454: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 W1030 05:14:48.467370 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 05:14:48.644: INFO: Latency metrics for node node1 Oct 30 05:14:48.644: INFO: Logging node info for node node2 Oct 30 05:14:48.647: INFO: Node Info: &Node{ObjectMeta:{node2 3b49ad19-ba56-4f4a-b1fa-eef102063de9 171597 0 2021-10-29 21:07:28 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-29 21:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-29 21:08:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-29 21:16:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-29 21:19:53 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-30 05:12:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-29 21:11:34 +0000 UTC,LastTransitionTime:2021-10-29 21:11:34 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:47 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:47 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-30 05:14:47 +0000 UTC,LastTransitionTime:2021-10-29 21:07:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-30 05:14:47 +0000 UTC,LastTransitionTime:2021-10-29 21:08:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7283436dd9e34722a6e4df817add95ed,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:c219e7bd-582b-4d6c-b379-1161acc70676,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[localhost:30500/cmk@sha256:430843a71fa03faf488543c9f5b50d3efbef49988d6784f9f48b8077cc806f60 localhost:30500/cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724463471,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:644a70516a26004c97d0d85c7fe1d0c3a67ea8ab7ddf4aff193d9f301670cf36 nginx:latest],SizeBytes:133277153,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 
k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:2f1ff7ac170c0ac8079e232ea4ee89d23b7906d1b824d901927acb4e399c52c9 localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[localhost:30500/tasextender@sha256:b7d2fa8154ac5d9cff45866e4d3d210a7d390f8576611c301a2eed2b57273227 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 30 05:14:48.648: INFO: Logging kubelet events for node node2 Oct 30 05:14:48.650: INFO: Logging pods the kubelet thinks is on node node2 Oct 30 05:14:48.670: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh started at 2021-10-29 21:24:23 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.670: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:14:48.670: INFO: rs-e2e-pts-filter-gflvz started at 2021-10-30 05:14:42 +0000 UTC (0+0 container statuses recorded) Oct 30 05:14:48.670: INFO: cmk-8bpbf started at 2021-10-29 21:20:11 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.670: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:48.670: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:48.670: INFO: node-exporter-r77s4 started at 2021-10-29 21:21:15 +0000 UTC (0+2 container statuses recorded) Oct 30 05:14:48.670: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.670: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.670: INFO: collectd-flvhl started at 2021-10-29 21:25:13 +0000 UTC (0+3 container statuses recorded) Oct 30 05:14:48.670: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:48.670: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:48.671: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.671: INFO: kube-proxy-76285 started at 2021-10-29 21:07:31 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:48.671: INFO: node-feature-discovery-worker-h6lcp started at 2021-10-29 21:15:58 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:48.671: INFO: rs-e2e-pts-filter-4k8w5 started at 2021-10-30 05:14:42 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.671: INFO: cmk-webhook-6c9d5f8578-ffk66 started at 2021-10-29 21:20:11 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:14:48.671: INFO: rs-e2e-pts-filter-fvfqj started at 2021-10-30 05:14:42 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.671: INFO: kube-flannel-f6s5v started at 2021-10-29 21:08:25 +0000 UTC (1+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Init container install-cni ready: true, restart count 2 Oct 30 05:14:48.671: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:14:48.671: INFO: kube-multus-ds-amd64-7tvbl started at 2021-10-29 21:08:34 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:48.671: INFO: cmk-init-discover-node2-2fmmt started at 2021-10-29 21:19:48 +0000 UTC (0+3 container statuses recorded) Oct 30 05:14:48.671: INFO: Container discover ready: false, restart count 0 Oct 30 
05:14:48.671: INFO: Container init ready: false, restart count 0 Oct 30 05:14:48.671: INFO: Container install ready: false, restart count 0 Oct 30 05:14:48.671: INFO: nginx-proxy-node2 started at 2021-10-29 21:07:28 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:48.671: INFO: kubernetes-dashboard-785dcbb76d-pbjjt started at 2021-10-29 21:09:04 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:14:48.671: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg started at 2021-10-29 21:17:10 +0000 UTC (0+1 container statuses recorded) Oct 30 05:14:48.671: INFO: Container kube-sriovdp ready: true, restart count 0 W1030 05:14:48.684814 22 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 30 05:14:48.840: INFO: Latency metrics for node node2 Oct 30 05:14:48.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5774" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • Failure [18.933 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Filtering /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:716 validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:734 Oct 30 05:14:48.060: Pods are not distributed as expected on node "node2" Expected : 3 to equal : 2 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:789 ------------------------------ {"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes","total":13,"completed":7,"skipped":2610,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:14:48.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
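The failure recorded above comes from the PodTopologySpread "Filtering" spec: four pods carrying the same label and a topology spread constraint with maxSkew 1 should land 2/2 across the two schedulable worker nodes, but node2 ended up with 3. A minimal sketch of the kind of constraint being exercised (the label selector here is illustrative, not the test's generated labels):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative selector; the e2e test labels its ReplicaSet pods with generated keys.
	selector := &metav1.LabelSelector{
		MatchLabels: map[string]string{"app": "e2e-pts-filter"},
	}

	constraint := corev1.TopologySpreadConstraint{
		MaxSkew:           1,                        // |count(node1) - count(node2)| must not exceed 1
		TopologyKey:       "kubernetes.io/hostname", // spread across individual nodes
		WhenUnsatisfiable: corev1.DoNotSchedule,     // Filtering: violating placements are rejected outright
		LabelSelector:     selector,
	}

	fmt.Printf("%+v\n", constraint)
}
```

With four replicas on two eligible nodes, any 3/1 placement has a skew of 2, which is why the assertion expects 2 matching pods on node2 and fails when it finds 3.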
[sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:14:48.877: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:14:48.884: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:14:48.886: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:14:48.895: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:14:48.895: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:48.895: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:14:48.895: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:48.895: INFO: Container init ready: false, restart count 0 Oct 30 05:14:48.895: INFO: Container install ready: false, restart count 0 Oct 30 05:14:48.895: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:14:48.895: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:48.895: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:48.895: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:14:48.895: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:48.895: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:48.895: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:48.895: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:14:48.895: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.895: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:14:48.895: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.895: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:14:48.895: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:14:48.895: 
INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Container grafana ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:14:48.895: INFO: rs-e2e-pts-filter-72v7q from sched-pred-5774 started at 2021-10-30 05:14:43 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.895: INFO: rs-e2e-pts-filter-l2s7l from sched-pred-5774 started at 2021-10-30 05:14:43 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.895: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.895: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:14:48.912: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:14:48.912: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:14:48.912: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:14:48.912: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:14:48.912: INFO: Container discover ready: false, restart count 0 Oct 30 05:14:48.912: INFO: Container init ready: false, restart count 0 Oct 30 05:14:48.912: INFO: Container install ready: false, restart count 0 Oct 30 05:14:48.912: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:14:48.912: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:14:48.912: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:14:48.912: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:14:48.912: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:14:48.912: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:14:48.912: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:14:48.912: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:14:48.912: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:14:48.912: INFO: Container collectd ready: true, restart count 0 Oct 30 05:14:48.912: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:14:48.912: INFO: Container rbac-proxy 
ready: true, restart count 0 Oct 30 05:14:48.912: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:14:48.912: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:14:48.912: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:14:48.912: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:14:48.912: INFO: rs-e2e-pts-filter-4k8w5 from sched-pred-5774 started at 2021-10-30 05:14:42 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.912: INFO: rs-e2e-pts-filter-fvfqj from sched-pred-5774 started at 2021-10-30 05:14:42 +0000 UTC (1 container statuses recorded) Oct 30 05:14:48.912: INFO: Container e2e-pts-filter ready: true, restart count 0 Oct 30 05:14:48.912: INFO: rs-e2e-pts-filter-gflvz from sched-pred-5774 started at 2021-10-30 05:14:42 +0000 UTC (0 container statuses recorded) [BeforeEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:214 STEP: Add RuntimeClass and fake resource STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. [It] verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 STEP: Starting Pod to consume most of the node's resource. STEP: Creating another pod that requires unavailable amount of resources. STEP: Considering event: Type = [Warning], Name = [filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481.16b2b70fc1974954], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] 
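The "Add RuntimeClass and fake resource" step above patches the chosen node with a fake extended resource (example.com/beardsecond, visible in the FailedScheduling event) and registers a RuntimeClass whose overhead is charged to every pod that uses it, so the scheduler has to account for request plus overhead. A minimal sketch of such a RuntimeClass, using an assumed name and quantity rather than the test's exact values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "overhead-runtimeclass"}, // hypothetical name
		Handler:    "runc",
		Overhead: &nodev1.Overhead{
			// Charged on top of each pod's own requests at scheduling time.
			PodFixed: corev1.ResourceList{
				corev1.ResourceName("example.com/beardsecond"): resource.MustParse("250"), // illustrative quantity
			},
		},
	}
	fmt.Printf("%s overhead: %+v\n", rc.Name, rc.Overhead.PodFixed)
}
```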
STEP: Considering event: Type = [Normal], Name = [filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481.16b2b710d0848dea], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3864/filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481 to node2] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481.16b2b71126ceab9a], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481.16b2b711381b8ae7], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 290.24462ms] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481.16b2b7113e97fad0], Reason = [Created], Message = [Created container filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481] STEP: Considering event: Type = [Normal], Name = [filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481.16b2b7114595842d], Reason = [Started], Message = [Started container filler-pod-f0d5b781-707d-46fd-961a-97a67e2e2481] STEP: Considering event: Type = [Normal], Name = [without-label.16b2b70ed0e06b09], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3864/without-label to node2] STEP: Considering event: Type = [Normal], Name = [without-label.16b2b70f25e9addf], Reason = [Pulling], Message = [Pulling image "k8s.gcr.io/pause:3.4.1"] STEP: Considering event: Type = [Normal], Name = [without-label.16b2b70f38eb5065], Reason = [Pulled], Message = [Successfully pulled image "k8s.gcr.io/pause:3.4.1" in 318.865307ms] STEP: Considering event: Type = [Normal], Name = [without-label.16b2b70f3f86d7dc], Reason = [Created], Message = [Created container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b2b70f46a2146e], Reason = [Started], Message = [Started container without-label] STEP: Considering event: Type = [Normal], Name = [without-label.16b2b70fc0132182], Reason = [Killing], Message = [Stopping container without-label] STEP: Considering event: Type = [Warning], Name = [additional-pod815402a1-fe09-45a8-a11f-9e1f5f339ec6.16b2b7119fb388e7], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 Insufficient example.com/beardsecond, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:249 STEP: Remove fake resource and RuntimeClass [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:15:02.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3864" for this suite. 
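The FailedScheduling messages above add up as expected: of the 5 nodes, 3 masters are rejected by the node-role.kubernetes.io/master taint, and the 2 workers no longer have enough example.com/beardsecond once the filler pod plus the RuntimeClass overhead is counted. A pod of roughly this shape (a sketch with assumed names and quantities, not the test's spec) is what ends up unschedulable:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runtimeClassName := "overhead-runtimeclass" // hypothetical, matching the earlier sketch
	beardseconds := corev1.ResourceList{
		corev1.ResourceName("example.com/beardsecond"): resource.MustParse("500"), // illustrative
	}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClassName,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.4.1",
				// Effective demand = request + RuntimeClass overhead; if that exceeds
				// what the filler pod left free, scheduling fails as logged above.
				Resources: corev1.ResourceRequirements{Requests: beardseconds, Limits: beardseconds},
			}},
		},
	}
	fmt.Println(pod.Name, pod.Spec.Containers[0].Resources.Requests)
}
```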
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:13.182 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates pod overhead is considered along with resource limits of pods that are allowed to run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:209 verify pod overhead is accounted for /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:269 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for","total":13,"completed":8,"skipped":2967,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:15:02.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:15:02.063: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:16:02.113: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:16:02.116: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:16:02.135: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:16:02.135: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:16:02.135: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:16:02.135: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. 
Oct 30 05:16:02.152: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:16:02.152: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:16:02.152: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.152: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 
77000, cpuFraction: 0.0012987012987012987 Oct 30 05:16:02.152: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [It] Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 Oct 30 05:16:02.167: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:16:02.167: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:16:02.167: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: 
tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:16:02.167: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:16:02.167: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 05:16:02.184: INFO: Waiting for running... Oct 30 05:16:02.184: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. Oct 30 05:16:07.251: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:16:07.251: INFO: Node: node1, totalRequestedMemResource: 1161655398400, memAllocatableVal: 178884632576, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
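The ComputeCPUMemFraction lines above are straightforward ratios of total requested resources to node allocatable, never reported above 1. Re-deriving node1's logged values (a quick standard-library sketch):

```go
package main

import "fmt"

// fraction mirrors the logged ComputeCPUMemFraction arithmetic:
// requested / allocatable, capped at 1 as in the values printed above.
func fraction(requested, allocatable float64) float64 {
	f := requested / allocatable
	if f > 1 {
		f = 1
	}
	return f
}

func main() {
	// node1 before the balancing pods: 100 millicores and 104857600 bytes requested.
	fmt.Println(fraction(100, 77000))              // 0.0012987012987012987
	fmt.Println(fraction(104857600, 178884632576)) // 0.0005861744437742619

	// node1 after the balancing pods: requests exceed allocatable, so both report 1.
	fmt.Println(fraction(499300, 77000))               // 1
	fmt.Println(fraction(1161655398400, 178884632576)) // 1
}
```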
Oct 30 05:16:07.251: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.251: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Pod for on the node: 38f7aa8f-63ba-449b-8055-70269029fed8-0, Cpu: 38400, Mem: 89350041600 Oct 30 05:16:07.252: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:16:07.252: INFO: Node: node2, totalRequestedMemResource: 1251005440000, memAllocatableVal: 178884628480, memFraction: 1 STEP: Trying to apply 10 (tolerable) taints on the first node. 
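The balancing pods created just before the taint phase are sized so that each node reaches the same target utilization before the priority under test is measured. The logged request of 38400 millicores on a node with 77000 millicores allocatable and 100 millicores already requested is consistent with padding each node to a 0.5 CPU fraction; a sketch of that sizing arithmetic (the 0.5 target is inferred from the numbers above, not taken from the test source):

```go
package main

import "fmt"

// paddingRequest returns the extra request needed to bring a node from its
// current requested total up to targetFraction of its allocatable capacity.
func paddingRequest(targetFraction, allocatable, requested float64) float64 {
	return targetFraction*allocatable - requested
}

func main() {
	// node1 CPU, in millicores: 0.5*77000 - 100 = 38400, matching the request
	// logged for the 38f7aa8f-...-0 balancing pod; the same formula applies to memory.
	fmt.Println(paddingRequest(0.5, 77000, 100)) // 38400
}
```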
STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a5bea09a-47f5-4d4c-b2d4=testing-taint-value-73cd1a94-d6e7-43a1-9ce4-8e7a314b1621:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2e1d71b2-2181-4a9f-bb86=testing-taint-value-1124f532-f10a-47a3-a469-3f43da018a2b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-691ebe89-2ea6-4e4f-912a=testing-taint-value-687ec858-ad96-4aa5-9933-3936a2a234a7:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-4e567256-3a30-4832-b3c9=testing-taint-value-3e359c39-35b2-4c58-a6ef-5ead7a4f2578:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-ccef79d2-7673-45af-a33f=testing-taint-value-86cfc41e-a3d4-4d59-af71-472291623e2f:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-a872877c-b6e6-4633-8441=testing-taint-value-1483ff2f-ea66-407e-928d-6187135af7f4:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-5248dc94-9432-4df3-9a2e=testing-taint-value-cf9bdc53-c64f-4d55-9e73-c3fef1717b0b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-42ca62d6-a0c9-4f67-b2b6=testing-taint-value-4737f8cf-eb70-48f8-9e2d-83f02d61dcfa:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-68dc8ea2-d3c9-4657-adae=testing-taint-value-d17cf076-2bac-4505-b19e-3d35496d3e7b:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2325a345-8bc8-4290-86f2=testing-taint-value-6de5955e-e6cb-4e5f-befb-98ac2426aec3:PreferNoSchedule STEP: Adding 10 intolerable taints to all other nodes STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9f12f4cf-f095-4297-a023=testing-taint-value-b7b8c3fd-4681-4687-8a04-ca1439faa1ea:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-dcde1141-3363-4a80-9fd7=testing-taint-value-9a05a5b5-e82c-410b-8b74-4080922071a5:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-bab479e6-68a8-4883-8173=testing-taint-value-2addcb28-961b-440b-88ee-ffca2572d4b8:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-9e9bca03-7296-47c8-97f3=testing-taint-value-702d601e-419c-4e2d-87fc-26ba87c5cacd:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-277b22d2-46e1-4214-8b59=testing-taint-value-7669026c-e4c7-4b8f-b8c4-645865018382:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-2c4474bd-febd-4a7b-ad7e=testing-taint-value-e86a3ab6-6613-4872-b0eb-5405f763dbe6:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-c0531a7b-1644-4e8e-95e2=testing-taint-value-ad3d8631-d6b6-4c53-bfe7-bd13e708cae2:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-0ef72d2f-6adb-4981-abc8=testing-taint-value-e7fe4759-f9e5-4c32-b81c-5f32d799700a:PreferNoSchedule STEP: verifying the node has the taint kubernetes.io/e2e-scheduling-priorities-84323660-aae9-4b1f-bbe2=testing-taint-value-37a2e529-b4af-4b89-a526-d9a3d9f16cb2:PreferNoSchedule STEP: verifying the node has the taint 
kubernetes.io/e2e-scheduling-priorities-5e945f27-e0ff-424a-9733=testing-taint-value-6bf856f6-ee43-4610-b079-72b6a7353215:PreferNoSchedule STEP: Create a pod that tolerates all the taints of the first node. STEP: Pod should prefer scheduled to the node that pod can tolerate. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9f12f4cf-f095-4297-a023=testing-taint-value-b7b8c3fd-4681-4687-8a04-ca1439faa1ea:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-dcde1141-3363-4a80-9fd7=testing-taint-value-9a05a5b5-e82c-410b-8b74-4080922071a5:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-bab479e6-68a8-4883-8173=testing-taint-value-2addcb28-961b-440b-88ee-ffca2572d4b8:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-9e9bca03-7296-47c8-97f3=testing-taint-value-702d601e-419c-4e2d-87fc-26ba87c5cacd:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-277b22d2-46e1-4214-8b59=testing-taint-value-7669026c-e4c7-4b8f-b8c4-645865018382:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2c4474bd-febd-4a7b-ad7e=testing-taint-value-e86a3ab6-6613-4872-b0eb-5405f763dbe6:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-c0531a7b-1644-4e8e-95e2=testing-taint-value-ad3d8631-d6b6-4c53-bfe7-bd13e708cae2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-0ef72d2f-6adb-4981-abc8=testing-taint-value-e7fe4759-f9e5-4c32-b81c-5f32d799700a:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-84323660-aae9-4b1f-bbe2=testing-taint-value-37a2e529-b4af-4b89-a526-d9a3d9f16cb2:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5e945f27-e0ff-424a-9733=testing-taint-value-6bf856f6-ee43-4610-b079-72b6a7353215:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a5bea09a-47f5-4d4c-b2d4=testing-taint-value-73cd1a94-d6e7-43a1-9ce4-8e7a314b1621:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2e1d71b2-2181-4a9f-bb86=testing-taint-value-1124f532-f10a-47a3-a469-3f43da018a2b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-691ebe89-2ea6-4e4f-912a=testing-taint-value-687ec858-ad96-4aa5-9933-3936a2a234a7:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-4e567256-3a30-4832-b3c9=testing-taint-value-3e359c39-35b2-4c58-a6ef-5ead7a4f2578:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-ccef79d2-7673-45af-a33f=testing-taint-value-86cfc41e-a3d4-4d59-af71-472291623e2f:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-a872877c-b6e6-4633-8441=testing-taint-value-1483ff2f-ea66-407e-928d-6187135af7f4:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-5248dc94-9432-4df3-9a2e=testing-taint-value-cf9bdc53-c64f-4d55-9e73-c3fef1717b0b:PreferNoSchedule STEP: verifying the node doesn't have the taint 
kubernetes.io/e2e-scheduling-priorities-42ca62d6-a0c9-4f67-b2b6=testing-taint-value-4737f8cf-eb70-48f8-9e2d-83f02d61dcfa:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-68dc8ea2-d3c9-4657-adae=testing-taint-value-d17cf076-2bac-4505-b19e-3d35496d3e7b:PreferNoSchedule STEP: verifying the node doesn't have the taint kubernetes.io/e2e-scheduling-priorities-2325a345-8bc8-4290-86f2=testing-taint-value-6de5955e-e6cb-4e5f-befb-98ac2426aec3:PreferNoSchedule [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:16:24.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-2464" for this suite. [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:82.571 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 Pod should be preferably scheduled to nodes pod can tolerate /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:329 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate","total":13,"completed":9,"skipped":3507,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:16:24.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-priority STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:156 Oct 30 05:16:24.646: INFO: Waiting up to 1m0s for all nodes to be ready Oct 30 05:17:24.698: INFO: Waiting for terminating namespaces to be deleted... 
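The spec that just passed ("Pod should be preferably scheduled to nodes pod can tolerate") relies on PreferNoSchedule taints like the ones added and verified above: the scheduler treats an untolerated PreferNoSchedule taint as a soft penalty rather than a hard filter, so the test pod drifts toward the one node whose taints it tolerates. A minimal sketch of one such taint and a matching toleration built with the core/v1 types; the key and value are shortened placeholders, not the generated ones from the log:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// A soft taint: pods that do not tolerate it are only deprioritized
	// during scoring, never rejected outright.
	taint := v1.Taint{
		Key:    "kubernetes.io/e2e-scheduling-priorities-example", // placeholder key
		Value:  "testing-taint-value-example",                     // placeholder value
		Effect: v1.TaintEffectPreferNoSchedule,
	}

	// The matching toleration the test pod would carry for this taint.
	toleration := v1.Toleration{
		Key:      taint.Key,
		Operator: v1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   v1.TaintEffectPreferNoSchedule,
	}

	fmt.Printf("taint: %s=%s:%s\n", taint.Key, taint.Value, taint.Effect)
	fmt.Printf("toleration: %s %s %s (%s)\n", toleration.Key, toleration.Operator, toleration.Value, toleration.Effect)
}
```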
Oct 30 05:17:24.700: INFO: Waiting up to 5m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Oct 30 05:17:24.720: INFO: The status of Pod cmk-init-discover-node1-n4mcc is Succeeded, skipping waiting Oct 30 05:17:24.720: INFO: The status of Pod cmk-init-discover-node2-2fmmt is Succeeded, skipping waiting Oct 30 05:17:24.720: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Oct 30 05:17:24.720: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready. Oct 30 05:17:24.735: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:17:24.735: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:17:24.735: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod 
for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:24.735: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:17:24.735: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 [BeforeEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:392 STEP: Trying to get 2 available nodes which can run pod STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Apply dedicated topologyKey kubernetes.io/e2e-pts-score for this test on the 2 nodes. [It] validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 Oct 30 05:17:32.831: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Node: node2, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:17:32.831: INFO: Node: node2, totalRequestedMemResource: 104857600, memAllocatableVal: 178884628480, memFraction: 0.0005861744571961558 Oct 30 05:17:32.831: INFO: ComputeCPUMemFraction for node: node1 Oct 30 
05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.831: INFO: Pod for on the node: tas-telemetry-aware-scheduling-84ff454dfb-989mh, Cpu: 100, Mem: 209715200 Oct 30 05:17:32.832: INFO: Node: node1, totalRequestedCPUResource: 100, cpuAllocatableMil: 77000, cpuFraction: 0.0012987012987012987 Oct 30 05:17:32.832: INFO: Node: node1, totalRequestedMemResource: 104857600, memAllocatableVal: 178884632576, memFraction: 0.0005861744437742619 Oct 30 05:17:32.842: INFO: Waiting for running... Oct 30 05:17:32.847: INFO: Waiting for running... STEP: Compute Cpu, Mem Fraction after create balanced pods. 
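The "Compute Cpu, Mem Fraction after create balanced pods" steps here reflect the framework's setup for scoring specs: before measuring scheduler preferences it creates placeholder pods so that every candidate node sits at roughly the same request fraction, keeping resource-based scoring from dominating the behavior under test. A rough sketch of that padding arithmetic under our reading of the log; the function name and the target-fraction approach are assumptions, not the framework's actual implementation:

```go
package main

import "fmt"

// padToFraction returns how much of a resource a placeholder pod would have
// to request so that a node currently at `requested` out of `allocatable`
// reaches the target utilization fraction. Assumption: this mirrors the idea
// behind the "create balanced pods" step, not its exact code.
func padToFraction(requested, allocatable, target float64) float64 {
	pad := target*allocatable - requested
	if pad < 0 {
		return 0
	}
	return pad
}

func main() {
	// Milli-CPU figures similar to the per-node values logged above.
	node1Requested, node1Allocatable := 100.0, 77000.0
	node2Requested, node2Allocatable := 100.0, 77000.0
	target := 0.5 // hypothetical common fraction both nodes are padded up to

	fmt.Println("node1 pad (milli-CPU):", padToFraction(node1Requested, node1Allocatable, target))
	fmt.Println("node2 pad (milli-CPU):", padToFraction(node2Requested, node2Allocatable, target))
}
```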
Oct 30 05:17:37.923: INFO: ComputeCPUMemFraction for node: node2 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Node: node2, totalRequestedCPUResource: 537700, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:17:37.923: INFO: Node: node2, totalRequestedMemResource: 1251005411328, memAllocatableVal: 178884628480, memFraction: 1 STEP: Compute Cpu, Mem Fraction after create balanced pods. 
Oct 30 05:17:37.923: INFO: ComputeCPUMemFraction for node: node1 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Pod for on the node: aec2b95a-4044-4eab-9fd4-c4661458fa0d-0, Cpu: 38400, Mem: 89350039552 Oct 30 05:17:37.923: INFO: Node: node1, totalRequestedCPUResource: 499300, cpuAllocatableMil: 77000, cpuFraction: 1 Oct 30 05:17:37.923: INFO: Node: node1, totalRequestedMemResource: 1161655371776, memAllocatableVal: 178884632576, memFraction: 1 STEP: Run a ReplicaSet with 4 replicas on node "node2" STEP: Verifying if the test-pod lands on node "node1" [AfterEach] PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:400 STEP: removing the label kubernetes.io/e2e-pts-score off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score STEP: removing the label kubernetes.io/e2e-pts-score off the node node1 STEP: verifying the node doesn't have the label kubernetes.io/e2e-pts-score [AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:18:03.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-priority-552" for this suite. 
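The PodTopologySpread Scoring spec above pins a 4-replica ReplicaSet to node2 under the dedicated kubernetes.io/e2e-pts-score topology label and then checks that a pod carrying a matching spread constraint is preferred onto node1, where it evens out the distribution. A minimal sketch of the kind of soft (ScheduleAnyway) constraint involved, built with the core/v1 types; the label selector value is a placeholder:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A soft topology spread constraint: the scheduler scores nodes so that
	// pods matching the selector spread as evenly as possible across values
	// of the topology key, but it never rejects a node outright.
	constraint := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-score",
		WhenUnsatisfiable: v1.ScheduleAnyway, // scoring, not filtering
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-score"}, // placeholder label
		},
	}

	fmt.Printf("%+v\n", constraint)
}
```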
[AfterEach] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:153 • [SLOW TEST:99.382 seconds] [sig-scheduling] SchedulerPriorities [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PodTopologySpread Scoring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:388 validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/priorities.go:406 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed","total":13,"completed":10,"skipped":4258,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:18:04.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:18:04.036: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 30 05:18:04.045: INFO: Waiting for terminating namespaces to be deleted... 
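The next spec ("validates that NodeAffinity is respected if not matching", whose setup and FailedScheduling event follow below) creates a pod whose required node affinity matches no node, so it must stay Pending. A minimal sketch of such a non-matching requirement with the core/v1 types; the label key and value are placeholders:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Required node affinity that no node in the cluster satisfies, so the
	// pod remains unscheduled and the scheduler records FailedScheduling.
	affinity := &v1.Affinity{
		NodeAffinity: &v1.NodeAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: &v1.NodeSelector{
				NodeSelectorTerms: []v1.NodeSelectorTerm{{
					MatchExpressions: []v1.NodeSelectorRequirement{{
						Key:      "kubernetes.io/e2e-nonexistent-label", // placeholder key
						Operator: v1.NodeSelectorOpIn,
						Values:   []string{"no-such-value"}, // placeholder value
					}},
				}},
			},
		},
	}

	fmt.Printf("%+v\n", affinity.NodeAffinity)
}
```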
Oct 30 05:18:04.047: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:18:04.057: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:18:04.057: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:18:04.057: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:18:04.057: INFO: Container discover ready: false, restart count 0 Oct 30 05:18:04.057: INFO: Container init ready: false, restart count 0 Oct 30 05:18:04.057: INFO: Container install ready: false, restart count 0 Oct 30 05:18:04.057: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:18:04.057: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:18:04.057: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:18:04.057: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:18:04.057: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:18:04.057: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:18:04.057: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:18:04.057: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:18:04.057: INFO: Container collectd ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:18:04.057: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:18:04.057: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:18:04.057: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:18:04.057: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container grafana ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Container prometheus ready: true, restart count 1 Oct 30 05:18:04.057: INFO: test-pod from sched-priority-552 started at 2021-10-30 05:17:45 +0000 UTC 
(1 container statuses recorded) Oct 30 05:18:04.057: INFO: Container test-pod ready: true, restart count 0 Oct 30 05:18:04.057: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:18:04.073: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:18:04.073: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:18:04.073: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:18:04.073: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:18:04.073: INFO: Container discover ready: false, restart count 0 Oct 30 05:18:04.073: INFO: Container init ready: false, restart count 0 Oct 30 05:18:04.073: INFO: Container install ready: false, restart count 0 Oct 30 05:18:04.073: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:18:04.073: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:18:04.073: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:18:04.073: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:18:04.073: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:18:04.073: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:18:04.073: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:18:04.073: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:18:04.073: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:18:04.073: INFO: Container collectd ready: true, restart count 0 Oct 30 05:18:04.073: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:18:04.073: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:18:04.073: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:18:04.073: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:18:04.073: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:18:04.073: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container tas-extender ready: true, restart 
count 0 Oct 30 05:18:04.073: INFO: rs-e2e-pts-score-dgh2s from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 30 05:18:04.073: INFO: rs-e2e-pts-score-dkp27 from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 30 05:18:04.073: INFO: rs-e2e-pts-score-qcr96 from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 30 05:18:04.073: INFO: rs-e2e-pts-score-x84bn from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:04.073: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that NodeAffinity is respected if not matching /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:487 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16b2b73c4108eeca], Reason = [FailedScheduling], Message = [0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:18:05.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1581" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching","total":13,"completed":11,"skipped":4307,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 30 05:18:05.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 30 05:18:05.147: INFO: Waiting up to 1m0s for all 
(but 0) nodes to be ready Oct 30 05:18:05.155: INFO: Waiting for terminating namespaces to be deleted... Oct 30 05:18:05.157: INFO: Logging pods the apiserver thinks is on node node1 before test Oct 30 05:18:05.164: INFO: cmk-89lqq from kube-system started at 2021-10-29 21:20:10 +0000 UTC (2 container statuses recorded) Oct 30 05:18:05.164: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:18:05.164: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:18:05.164: INFO: cmk-init-discover-node1-n4mcc from kube-system started at 2021-10-29 21:19:28 +0000 UTC (3 container statuses recorded) Oct 30 05:18:05.164: INFO: Container discover ready: false, restart count 0 Oct 30 05:18:05.164: INFO: Container init ready: false, restart count 0 Oct 30 05:18:05.164: INFO: Container install ready: false, restart count 0 Oct 30 05:18:05.164: INFO: kube-flannel-phg88 from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container kube-flannel ready: true, restart count 2 Oct 30 05:18:05.165: INFO: kube-multus-ds-amd64-68wrz from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:18:05.165: INFO: kube-proxy-z5hqt from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:18:05.165: INFO: kubernetes-metrics-scraper-5558854cb-5rmjw from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 30 05:18:05.165: INFO: nginx-proxy-node1 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:18:05.165: INFO: node-feature-discovery-worker-w5vdb from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:18:05.165: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-t789r from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:18:05.165: INFO: collectd-d45rv from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:18:05.165: INFO: Container collectd ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:18:05.165: INFO: node-exporter-256wm from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:18:05.165: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:18:05.165: INFO: prometheus-k8s-0 from monitoring started at 2021-10-29 21:21:17 +0000 UTC (4 container statuses recorded) Oct 30 05:18:05.165: INFO: Container config-reloader ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Container grafana ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Container prometheus ready: true, restart count 
1 Oct 30 05:18:05.165: INFO: test-pod from sched-priority-552 started at 2021-10-30 05:17:45 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.165: INFO: Container test-pod ready: true, restart count 0 Oct 30 05:18:05.165: INFO: Logging pods the apiserver thinks is on node node2 before test Oct 30 05:18:05.174: INFO: cmk-8bpbf from kube-system started at 2021-10-29 21:20:11 +0000 UTC (2 container statuses recorded) Oct 30 05:18:05.174: INFO: Container nodereport ready: true, restart count 0 Oct 30 05:18:05.174: INFO: Container reconcile ready: true, restart count 0 Oct 30 05:18:05.174: INFO: cmk-init-discover-node2-2fmmt from kube-system started at 2021-10-29 21:19:48 +0000 UTC (3 container statuses recorded) Oct 30 05:18:05.174: INFO: Container discover ready: false, restart count 0 Oct 30 05:18:05.174: INFO: Container init ready: false, restart count 0 Oct 30 05:18:05.174: INFO: Container install ready: false, restart count 0 Oct 30 05:18:05.174: INFO: cmk-webhook-6c9d5f8578-ffk66 from kube-system started at 2021-10-29 21:20:11 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container cmk-webhook ready: true, restart count 0 Oct 30 05:18:05.174: INFO: kube-flannel-f6s5v from kube-system started at 2021-10-29 21:08:25 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container kube-flannel ready: true, restart count 3 Oct 30 05:18:05.174: INFO: kube-multus-ds-amd64-7tvbl from kube-system started at 2021-10-29 21:08:34 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container kube-multus ready: true, restart count 1 Oct 30 05:18:05.174: INFO: kube-proxy-76285 from kube-system started at 2021-10-29 21:07:31 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container kube-proxy ready: true, restart count 1 Oct 30 05:18:05.174: INFO: kubernetes-dashboard-785dcbb76d-pbjjt from kube-system started at 2021-10-29 21:09:04 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 30 05:18:05.174: INFO: nginx-proxy-node2 from kube-system started at 2021-10-29 21:07:28 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container nginx-proxy ready: true, restart count 2 Oct 30 05:18:05.174: INFO: node-feature-discovery-worker-h6lcp from kube-system started at 2021-10-29 21:15:58 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container nfd-worker ready: true, restart count 0 Oct 30 05:18:05.174: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-69pkg from kube-system started at 2021-10-29 21:17:10 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 30 05:18:05.174: INFO: collectd-flvhl from monitoring started at 2021-10-29 21:25:13 +0000 UTC (3 container statuses recorded) Oct 30 05:18:05.174: INFO: Container collectd ready: true, restart count 0 Oct 30 05:18:05.174: INFO: Container collectd-exporter ready: true, restart count 0 Oct 30 05:18:05.174: INFO: Container rbac-proxy ready: true, restart count 0 Oct 30 05:18:05.174: INFO: node-exporter-r77s4 from monitoring started at 2021-10-29 21:21:15 +0000 UTC (2 container statuses recorded) Oct 30 05:18:05.174: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 30 05:18:05.174: INFO: Container node-exporter ready: true, restart count 0 Oct 30 05:18:05.174: INFO: tas-telemetry-aware-scheduling-84ff454dfb-989mh from monitoring started at 2021-10-29 21:24:23 +0000 UTC (1 
container statuses recorded) Oct 30 05:18:05.174: INFO: Container tas-extender ready: true, restart count 0 Oct 30 05:18:05.174: INFO: rs-e2e-pts-score-dgh2s from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.174: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 30 05:18:05.175: INFO: rs-e2e-pts-score-dkp27 from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.175: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 30 05:18:05.175: INFO: rs-e2e-pts-score-qcr96 from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.175: INFO: Container e2e-pts-score ready: true, restart count 0 Oct 30 05:18:05.175: INFO: rs-e2e-pts-score-x84bn from sched-priority-552 started at 2021-10-30 05:17:37 +0000 UTC (1 container statuses recorded) Oct 30 05:18:05.175: INFO: Container e2e-pts-score ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-3a9d78a2-0bca-4f59-95ee-974ed6375887 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-3a9d78a2-0bca-4f59-95ee-974ed6375887 off the node node2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-3a9d78a2-0bca-4f59-95ee-974ed6375887 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 30 05:18:23.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-157" for this suite. 
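The hostPort spec above launches three pods on the same node, all with hostPort 54321 but differing in hostIP (127.0.0.1 vs 10.10.190.208) or protocol (TCP vs UDP), and expects all three to schedule: a conflict requires the full (hostIP, hostPort, protocol) tuple to collide. A minimal sketch of the differing port declarations with the core/v1 types; container port number is a placeholder:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// Same hostPort on one node, yet no conflict: hostIP or protocol differs.
	ports := []v1.ContainerPort{
		{ContainerPort: 80, HostPort: 54321, HostIP: "127.0.0.1", Protocol: v1.ProtocolTCP},     // pod1
		{ContainerPort: 80, HostPort: 54321, HostIP: "10.10.190.208", Protocol: v1.ProtocolTCP}, // pod2
		{ContainerPort: 80, HostPort: 54321, HostIP: "10.10.190.208", Protocol: v1.ProtocolUDP}, // pod3
	}

	for i, p := range ports {
		fmt.Printf("pod%d: hostIP=%s hostPort=%d protocol=%s\n", i+1, p.HostIP, p.HostPort, p.Protocol)
	}
}
```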
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:18.166 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:654 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol","total":13,"completed":12,"skipped":4680,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSOct 30 05:18:23.304: INFO: Running AfterSuite actions on all nodes Oct 30 05:18:23.304: INFO: Running AfterSuite actions on node 1 Oct 30 05:18:23.304: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_scheduling_serial/junit_01.xml {"msg":"Test Suite completed","total":13,"completed":12,"skipped":5757,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes"]} Summarizing 1 Failure: [Fail] [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering [It] validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:789 Ran 13 of 5770 Specs in 557.201 seconds FAIL! -- 12 Passed | 1 Failed | 0 Pending | 5757 Skipped --- FAIL: TestE2E (557.24s) FAIL Ginkgo ran 1 suite in 9m18.489070091s Test Suite Failed
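The single failure summarized above is the PodTopologySpread Filtering spec, which exercises the hard (DoNotSchedule) form of the constraint: with MaxSkew=1, placements that would leave the matching pods spread more unevenly than that across the two test nodes must be rejected. For contrast with the scoring constraint sketched earlier, a minimal sketch of the filtering variant; the topology key and label are placeholders, and this only illustrates the constraint shape, not why the spec failed:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hard topology spread constraint: any placement that would push the skew
	// of matching pods above 1 across the topology domains is filtered out.
	constraint := v1.TopologySpreadConstraint{
		MaxSkew:           1,
		TopologyKey:       "kubernetes.io/e2e-pts-filter", // placeholder topology key
		WhenUnsatisfiable: v1.DoNotSchedule,               // filtering, not scoring
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "e2e-pts-filter"}, // placeholder label
		},
	}

	fmt.Printf("%+v\n", constraint)
}
```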