Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651274598 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

Apr 29 23:23:19.902: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:19.907: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 29 23:23:19.935: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 29 23:23:19.988: INFO: The status of Pod cmk-init-discover-node1-gxlbt is Succeeded, skipping waiting
Apr 29 23:23:19.988: INFO: The status of Pod cmk-init-discover-node2-csdn7 is Succeeded, skipping waiting
Apr 29 23:23:19.988: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 29 23:23:19.988: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Apr 29 23:23:19.988: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 29 23:23:20.011: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Apr 29 23:23:20.011: INFO: e2e test version: v1.21.9
Apr 29 23:23:20.012: INFO: kube-apiserver version: v1.21.1
Apr 29 23:23:20.012: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.019: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Apr 29 23:23:20.013: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.035: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
Apr 29 23:23:20.019: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.042: INFO: Cluster IP family: ipv4
Apr 29 23:23:20.021: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.043: INFO: Cluster IP family: ipv4
SS
------------------------------
Apr 29 23:23:20.022: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.044: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Apr 29 23:23:20.031: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.051: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
Apr 29 23:23:20.032: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.055: INFO: Cluster IP family: ipv4
SSS
------------------------------
Apr 29 23:23:20.035: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.056: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Apr 29 23:23:20.052: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.073: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Apr 29 23:23:20.058: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:20.079: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0429 23:23:20.269655 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.269: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.271: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should create and update a lease in the kube-node-lease namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50
STEP: check that lease for this Kubelet exists in the kube-node-lease namespace
STEP: check that node lease is updated at least once within the lease duration
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:20.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-2391" for this suite.
•SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
W0429 23:23:20.317389 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.317: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.319: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Apr 29 23:23:20.322: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:20.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-6721" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.038 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0429 23:23:20.337558 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.337: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.339: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:20.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-7587" for this suite.
•SSSS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":71,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Apr 29 23:23:20.505: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:20.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-9266" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.027 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W0429 23:23:20.390848 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.391: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.394: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:25.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3114" for this suite.

• [SLOW TEST:5.073 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":90,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":1,"skipped":41,"failed":0}
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Apr 29 23:23:20.321: INFO: Waiting up to 5m0s for pod "downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd" in namespace "downward-api-3433" to be "Succeeded or Failed"
Apr 29 23:23:20.323: INFO: Pod "downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.720174ms
Apr 29 23:23:22.327: INFO: Pod "downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005923686s
Apr 29 23:23:24.331: INFO: Pod "downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010102745s
Apr 29 23:23:26.336: INFO: Pod "downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014318475s
STEP: Saw pod success
Apr 29 23:23:26.336: INFO: Pod "downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd" satisfied condition "Succeeded or Failed"
Apr 29 23:23:26.338: INFO: Trying to get logs from node node1 pod downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd container dapi-container:
STEP: delete the pod
Apr 29 23:23:26.349: INFO: Waiting for pod downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd to disappear
Apr 29 23:23:26.351: INFO: Pod downward-api-8ee976b1-c852-4959-8a63-ab3b9042b0dd no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:26.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3433" for this suite.
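[Editor's note: the Downward API spec exercised above creates a hostNetwork pod whose env vars are populated from pod status fields. A minimal sketch of such a manifest is shown below; the pod name, image, and env var names are illustrative, not taken from this log (the suite generates its own names and images).]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo        # illustrative name
spec:
  hostNetwork: true                  # pod shares the node's network namespace
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36              # assumed image
    command: ["sh", "-c", "env"]     # dump env vars, as the test's container does
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # Downward API: the node's IP
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP    # with hostNetwork, equals HOST_IP
```

With `hostNetwork: true` the test can assert that both env vars resolve to the same node address.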
• [SLOW TEST:6.070 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":2,"skipped":41,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0429 23:23:20.315206 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.315: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.317: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Apr 29 23:23:20.330: INFO: Waiting up to 5m0s for pod "security-context-04997506-8893-41c7-bdd5-691106447382" in namespace "security-context-4077" to be "Succeeded or Failed"
Apr 29 23:23:20.333: INFO: Pod "security-context-04997506-8893-41c7-bdd5-691106447382": Phase="Pending", Reason="", readiness=false. Elapsed: 3.021806ms
Apr 29 23:23:22.338: INFO: Pod "security-context-04997506-8893-41c7-bdd5-691106447382": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008060981s
Apr 29 23:23:24.342: INFO: Pod "security-context-04997506-8893-41c7-bdd5-691106447382": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011339405s
Apr 29 23:23:26.345: INFO: Pod "security-context-04997506-8893-41c7-bdd5-691106447382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01429485s
STEP: Saw pod success
Apr 29 23:23:26.345: INFO: Pod "security-context-04997506-8893-41c7-bdd5-691106447382" satisfied condition "Succeeded or Failed"
Apr 29 23:23:26.347: INFO: Trying to get logs from node node1 pod security-context-04997506-8893-41c7-bdd5-691106447382 container test-container:
STEP: delete the pod
Apr 29 23:23:26.360: INFO: Waiting for pod security-context-04997506-8893-41c7-bdd5-691106447382 to disappear
Apr 29 23:23:26.362: INFO: Pod security-context-04997506-8893-41c7-bdd5-691106447382 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4077" for this suite.
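[Editor's note: the "seccomp unconfined on the pod" case corresponds roughly to a pod spec like the one below. The test's STEP line references the legacy `seccomp.security.alpha.kubernetes.io/pod` annotation; the manifest shows the `securityContext.seccompProfile` field that replaced it. Names, image, and command are illustrative assumptions.]

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined-demo   # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: Unconfined            # pod-level: no seccomp filtering for any container
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36           # assumed image
    # "Seccomp: 0" in /proc status indicates the process runs unconfined
    command: ["sh", "-c", "grep Seccomp /proc/1/status"]
```

A pod-level `seccompProfile` applies to every container unless a container-level `securityContext` overrides it.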
• [SLOW TEST:6.075 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0429 23:23:20.184358 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.184: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.187: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Apr 29 23:23:20.202: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3572" to be "Succeeded or Failed"
Apr 29 23:23:20.204: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201921ms
Apr 29 23:23:22.208: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006082413s
Apr 29 23:23:24.212: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010440756s
Apr 29 23:23:26.218: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015949691s
Apr 29 23:23:26.218: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:26.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3572" for this suite.

• [SLOW TEST:6.315 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":37,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0429 23:23:20.515852 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.516: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.517: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Apr 29 23:23:20.529: INFO: Waiting up to 5m0s for pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a" in namespace "security-context-test-4063" to be "Succeeded or Failed"
Apr 29 23:23:20.532: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390122ms
Apr 29 23:23:22.537: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007478783s
Apr 29 23:23:24.540: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010998668s
Apr 29 23:23:26.545: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015287472s
Apr 29 23:23:28.548: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018827361s
Apr 29 23:23:30.551: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022116147s
Apr 29 23:23:32.556: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.026293377s
Apr 29 23:23:34.558: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.029094058s
Apr 29 23:23:34.558: INFO: Pod "busybox-user-0-31530cb4-9740-4346-8b5e-f14e944b317a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:34.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4063" for this suite.

• [SLOW TEST:14.072 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":140,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0429 23:23:20.637140 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 29 23:23:20.637: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Apr 29 23:23:20.639: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Apr 29 23:23:20.651: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80" in namespace "security-context-test-1632" to be "Succeeded or Failed"
Apr 29 23:23:20.653: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263587ms
Apr 29 23:23:22.656: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005013262s
Apr 29 23:23:24.660: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009141273s
Apr 29 23:23:26.665: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014480821s
Apr 29 23:23:28.669: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017899975s
Apr 29 23:23:30.675: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023980039s
Apr 29 23:23:32.680: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028722211s
Apr 29 23:23:34.684: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80": Phase="Failed", Reason="", readiness=false. Elapsed: 14.032988089s
Apr 29 23:23:34.684: INFO: Pod "busybox-readonly-true-48b72e8c-24f5-43b0-a280-f81e03b69b80" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:34.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1632" for this suite.

• [SLOW TEST:14.074 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":219,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Apr 29 23:23:20.735: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] should create a pod that prints his name and namespace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
STEP: creating the pod
Apr 29 23:23:20.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3075 create -f -'
Apr 29 23:23:21.292: INFO: stderr: ""
Apr 29 23:23:21.292: INFO: stdout: "pod/dapi-test-pod created\n"
STEP: checking if name and namespace were passed correctly
Apr 29 23:23:35.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3075 logs dapi-test-pod test-container'
Apr 29 23:23:35.479: INFO: stderr: ""
Apr 29 23:23:35.479: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3075\nMY_POD_IP=10.244.4.172\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
Apr 29 23:23:35.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3075 logs dapi-test-pod test-container'
Apr 29 23:23:35.655: INFO: stderr: ""
Apr 29 23:23:35.655: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3075\nMY_POD_IP=10.244.4.172\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n"
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:23:35.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-3075" for this suite.

• [SLOW TEST:14.963 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133
    should create a pod that prints his name and namespace
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":1,"skipped":227,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:26.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
Apr 29 23:23:26.531: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:28.534: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:30.535: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:32.537: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:34.537: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:36.536: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:38.534: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:40.533: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:42.537: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Apr 29 23:23:44.535: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
Apr 29 23:23:44.537: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-9746 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 29 23:23:44.537: INFO: >>> kubeConfig: /root/.kube/config
Apr 29 23:23:44.694: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-9746 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true
PreserveWhitespace:false Quiet:false} Apr 29 23:23:44.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Apr 29 23:23:44.800: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-9746 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:23:44.800: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:23:44.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-9746" for this suite. • [SLOW TEST:18.417 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:20.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0429 23:23:20.204948 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 29 23:23:20.205: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Apr 29 23:23:20.207: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-03ede941-5faf-4890-ad45-dc872b6ee391 in namespace container-probe-1159 Apr 29 23:23:26.232: INFO: Started pod liveness-03ede941-5faf-4890-ad45-dc872b6ee391 in namespace container-probe-1159 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 23:23:26.235: INFO: Initial restart count of pod liveness-03ede941-5faf-4890-ad45-dc872b6ee391 is 0 Apr 29 23:23:46.293: INFO: Restart count of pod container-probe-1159/liveness-03ede941-5faf-4890-ad45-dc872b6ee391 is now 1 (20.058232165s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:23:46.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1159" for this suite. 
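The probe spec above creates a pod whose HTTP liveness endpoint answers with a local redirect; the kubelet follows the redirect, and the pod's restart count climbs to 1 once the probed target fails. A minimal sketch of that kind of manifest as a plain dict — the image, port, path, and names here are illustrative assumptions, not taken from the test source:

```python
# Sketch of a pod with an HTTP liveness probe, similar in shape to the pod the
# spec above creates. Image, port, and redirect path are illustrative.
def http_liveness_pod(name, namespace, path="/redirect?loc=/healthz"):
    """Build a v1 Pod manifest dict whose liveness probe GETs a redirecting path."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [
                {
                    "name": "liveness",
                    "image": "registry.example.com/agnhost:latest",  # assumed image
                    "livenessProbe": {
                        "httpGet": {"path": path, "port": 8080},
                        "initialDelaySeconds": 5,
                        "failureThreshold": 1,  # restart after a single failed probe
                    },
                }
            ]
        },
    }

pod = http_liveness_pod("liveness-demo", "container-probe-demo")
```

With a manifest of this shape, the `Restart count ... is now 1` line in the log corresponds to the kubelet killing and recreating the container after the probe fails.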
• [SLOW TEST:26.128 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:47.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:23:50.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2055" for this suite. 
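The container-runtime spec that just passed checks the negative path: pulling from a private registry with no credentials leaves the container stuck in an image-pull error state. A hedged sketch of the two manifest variants, with and without `imagePullSecrets` — the registry host, image, and secret name are illustrative, not from the test source:

```python
# Sketch: the same pod with and without imagePullSecrets. Registry host and
# secret name are illustrative; without the secret the kubelet's pull from a
# private registry fails and the container never starts.
def private_image_pod(name, secret=None):
    spec = {
        "containers": [
            {"name": "main", "image": "registry.example.com/private/app:1.0"}
        ]
    }
    if secret is not None:
        # Reference a docker-registry Secret in the same namespace.
        spec["imagePullSecrets"] = [{"name": secret}]
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": spec,
    }

denied = private_image_pod("no-creds")            # the case the spec exercises
allowed = private_image_pod("with-creds", "regcred")
```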
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:50.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Apr 29 23:23:50.290: INFO: Waiting up to 5m0s for pod "security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0" in namespace "security-context-6173" to be "Succeeded or Failed" Apr 29 23:23:50.292: INFO: Pod "security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186636ms Apr 29 23:23:52.296: INFO: Pod "security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005807378s Apr 29 23:23:54.302: INFO: Pod "security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011988918s STEP: Saw pod success Apr 29 23:23:54.302: INFO: Pod "security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0" satisfied condition "Succeeded or Failed" Apr 29 23:23:54.304: INFO: Trying to get logs from node node2 pod security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0 container test-container: STEP: delete the pod Apr 29 23:23:54.317: INFO: Waiting for pod security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0 to disappear Apr 29 23:23:54.319: INFO: Pod security-context-21bf2684-a04d-4450-9ddb-dec08cd15ff0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:23:54.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6173" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":3,"skipped":503,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:26.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod 
status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:23:56.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4831" for this suite. • [SLOW TEST:30.072 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":2,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:54.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:24:03.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8128" for this suite. • [SLOW TEST:9.087 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":596,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:25.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 in namespace kubelet-9190
I0429 23:23:25.679311 31 runners.go:190] Created replication controller with name: cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025, namespace: kubelet-9190, replica count: 20
I0429 23:23:35.730813 31 runners.go:190] cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 23:23:45.732190 31 runners.go:190] cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 Pods: 20 out of 20 created, 18 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0429 23:23:55.733478 31 runners.go:190] cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 29 23:23:56.734: INFO: Checking pods on node node2 via /runningpods endpoint
Apr 29 23:23:56.734: INFO: Checking pods on node node1 via /runningpods endpoint
Apr 29 23:23:56.799: INFO: Resource usage on node "master2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.483       3759.48                 1701.46
"runtime"   0.117       626.97                  285.59
"kubelet"   0.117       626.97                  285.59

Resource usage on node "master3":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.263       3392.73                 1416.38
"runtime"   0.101       491.43                  219.15
"kubelet"   0.101       491.43                  219.15

Resource usage on node "node1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"runtime"   0.688       2561.54                 583.75
"kubelet"   0.688       2561.54                 583.75
"/"         1.688       6213.34                 2235.38

Resource usage on node "node2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.720       3994.39                 1156.34
"runtime"   0.890       1489.55                 530.35
"kubelet"   0.890       1489.55                 530.35

Resource usage on node "master1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"runtime"   0.134       699.75                  289.27
"kubelet"   0.134       699.75                  289.27
"/"         0.523       5051.61                 1781.51

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 in namespace kubelet-9190, will wait for the garbage collector to delete the pods
Apr 29 23:23:56.858: INFO: Deleting ReplicationController cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 took: 4.072875ms
Apr 29 23:23:57.459: INFO: Terminating ReplicationController cleanup20-7153c480-2bf8-46e3-90d9-616d60b84025 pods took: 600.377646ms
Apr 29 23:24:10.461: INFO: Checking pods on node node2 via /runningpods endpoint
Apr 29 23:24:10.461: INFO: Checking pods on node node1 via /runningpods endpoint
Apr 29 23:24:10.478: INFO: Deleting 20 pods on 2 nodes completed in 1.017934488s after the RC was deleted
Apr 29 23:24:10.478: INFO: CPU usage of containers on node "master2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.443  0.443  0.483  0.483  0.483
"runtime"   0.000  0.000  0.112  0.117  0.117  0.117  0.117
"kubelet"   0.000  0.000  0.112  0.117  0.117  0.117  0.117

CPU usage of containers on node "master3":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.283  0.283  0.306  0.306  0.306
"runtime"   0.000  0.000  0.075  0.084  0.084  0.084  0.084
"kubelet"   0.000  0.000  0.075  0.084  0.084  0.084  0.084

CPU usage of containers on node "node1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  1.020  1.688  1.688  1.688  1.688
"runtime"   0.000  0.000  0.507  0.640  0.640  0.640  0.640
"kubelet"   0.000  0.000  0.507  0.640  0.640  0.640  0.640

CPU usage of containers on node "node2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  1.646  1.646  1.907  1.907  1.907
"runtime"   0.000  0.000  0.811  0.811  0.811  0.811  0.811
"kubelet"   0.000  0.000  0.811  0.811  0.811  0.811  0.811

CPU usage of containers on node "master1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.521  0.521  0.523  0.523  0.523
"runtime"   0.000  0.000  0.113  0.134  0.134  0.134  0.134
"kubelet"   0.000  0.000  0.113  0.134  0.134  0.134  0.134

[AfterEach] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:10.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-9190" for this suite.
• [SLOW TEST:44.883 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":2,"skipped":186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:24:10.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-1657/configmap-test-1c800d3e-119b-4596-bf76-8a6c87267447 STEP: Updating configMap configmap-1657/configmap-test-1c800d3e-119b-4596-bf76-8a6c87267447 STEP: Verifying update of ConfigMap configmap-1657/configmap-test-1c800d3e-119b-4596-bf76-8a6c87267447 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:24:10.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1657" for this suite. 
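The ConfigMap spec above is a pure API-object test: create, update the `data` payload, read back and verify. The same create/update/verify cycle can be sketched on a plain manifest dict — the names and keys below are illustrative, not the generated ones from the log:

```python
# Sketch of the create/update/verify cycle the ConfigMap spec performs,
# modeled on a plain dict. Names and keys are illustrative.
def make_configmap(name, namespace, data):
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": dict(data),
    }

cm = make_configmap("configmap-test", "configmap-demo", {"data": "value"})

# "Updating configMap ...": replace the payload, as the test's update does.
cm["data"]["data"] = "updated"

# "Verifying update ...": read back and compare.
assert cm["data"]["data"] == "updated"
```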
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":3,"skipped":242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:57.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-f469de4c-93d0-4998-bd43-867e10df566a bar STEP: verifying the node has the label fizz-4288c898-35c0-4de9-85da-1731e064b583 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-4288c898-35c0-4de9-85da-1731e064b583 off the node node2 STEP: verifying the node doesn't have the label fizz-4288c898-35c0-4de9-85da-1731e064b583 STEP: removing the label foo-f469de4c-93d0-4998-bd43-867e10df566a off the node node2 STEP: verifying the node doesn't have the label foo-f469de4c-93d0-4998-bd43-867e10df566a [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:24:13.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-1943" for this suite. 
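The RuntimeClass spec above labels a node, creates a RuntimeClass whose `scheduling.nodeSelector` requires those labels, and checks the pod is scheduled there with no taints involved. The scheduler-visible constraint is the RuntimeClass selector merged into the pod's own `nodeSelector`; a sketch of that merge with illustrative label keys (the real ones in the log are randomly generated):

```python
# Sketch: merging a RuntimeClass scheduling.nodeSelector into a pod's
# nodeSelector, in the spirit of the RuntimeClass admission behavior.
# Handler name and labels are illustrative.
runtime_class = {
    "apiVersion": "node.k8s.io/v1",
    "kind": "RuntimeClass",
    "metadata": {"name": "demo-runtimeclass"},
    "handler": "runc",
    "scheduling": {"nodeSelector": {"foo-demo": "bar", "fizz-demo": "buzz"}},
}

pod_node_selector = {"kubernetes.io/os": "linux"}

def merged_selector(pod_selector, rc):
    """Union of the pod's nodeSelector and the RuntimeClass's. A key present in
    both with different values would make the pod unschedulable; this sketch
    simply lets the RuntimeClass value win."""
    merged = dict(pod_selector)
    merged.update(rc.get("scheduling", {}).get("nodeSelector", {}))
    return merged

selector = merged_selector(pod_node_selector, runtime_class)
```

Only nodes carrying all three labels satisfy the merged selector, which is why the test first applies both labels to node2 and removes them again in cleanup.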
• [SLOW TEST:16.118 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":3,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:20.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0429 23:23:20.318057 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 29 23:23:20.318: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Apr 29 23:23:20.320: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-07b47ba4-b999-4e51-9a94-69f0938080c3 in namespace container-probe-5238 Apr 29 23:23:26.341: INFO: Started pod busybox-07b47ba4-b999-4e51-9a94-69f0938080c3 in namespace container-probe-5238 STEP: checking 
the pod's current state and verifying that restartCount is present Apr 29 23:23:26.343: INFO: Initial restart count of pod busybox-07b47ba4-b999-4e51-9a94-69f0938080c3 is 0 Apr 29 23:24:14.461: INFO: Restart count of pod container-probe-5238/busybox-07b47ba4-b999-4e51-9a94-69f0938080c3 is now 1 (48.117930806s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:24:14.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5238" for this suite. • [SLOW TEST:54.181 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":1,"skipped":62,"failed":0} [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:24:14.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Apr 29 23:24:14.500: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:14.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-8937" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:10.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:16.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-417" for this suite.
• [SLOW TEST:6.056 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:14.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run without a specified user ID
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:19.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6195" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":306,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:13.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449
STEP: Creating pod liveness-override-144c8ebd-9957-4c51-90f7-566b28f44ec4 in namespace container-probe-9808
Apr 29 23:24:17.365: INFO: Started pod liveness-override-144c8ebd-9957-4c51-90f7-566b28f44ec4 in namespace container-probe-9808
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 23:24:17.367: INFO: Initial restart count of pod liveness-override-144c8ebd-9957-4c51-90f7-566b28f44ec4 is 0
Apr 29 23:24:19.373: INFO: Restart count of pod container-probe-9808/liveness-override-144c8ebd-9957-4c51-90f7-566b28f44ec4 is now 1 (2.005611736s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:19.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9808" for this suite.
• [SLOW TEST:6.064 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":4,"skipped":383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:17.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Apr 29 23:24:17.357: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-b5e22474-c5c0-4762-b70a-8b6d8066b336" in namespace "security-context-test-8577" to be "Succeeded or Failed"
Apr 29 23:24:17.358: INFO: Pod "busybox-privileged-true-b5e22474-c5c0-4762-b70a-8b6d8066b336": Phase="Pending", Reason="", readiness=false. Elapsed: 1.841326ms
Apr 29 23:24:19.364: INFO: Pod "busybox-privileged-true-b5e22474-c5c0-4762-b70a-8b6d8066b336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007374181s
Apr 29 23:24:21.369: INFO: Pod "busybox-privileged-true-b5e22474-c5c0-4762-b70a-8b6d8066b336": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01274031s
Apr 29 23:24:21.369: INFO: Pod "busybox-privileged-true-b5e22474-c5c0-4762-b70a-8b6d8066b336" satisfied condition "Succeeded or Failed"
Apr 29 23:24:21.374: INFO: Got logs for pod "busybox-privileged-true-b5e22474-c5c0-4762-b70a-8b6d8066b336": ""
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:21.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8577" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":5,"skipped":556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:35.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a failing exec liveness probe that took longer than the timeout
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254
STEP: Creating pod busybox-6a29b1d8-9a24-4e49-9d70-105a43af348f in namespace container-probe-5910
Apr 29 23:23:45.958: INFO: Started pod busybox-6a29b1d8-9a24-4e49-9d70-105a43af348f in namespace container-probe-5910
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 23:23:45.961: INFO: Initial restart count of pod busybox-6a29b1d8-9a24-4e49-9d70-105a43af348f is 0
Apr 29 23:24:30.062: INFO: Restart count of pod container-probe-5910/busybox-6a29b1d8-9a24-4e49-9d70-105a43af348f is now 1 (44.10055901s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:30.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5910" for this suite.
• [SLOW TEST:54.163 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a failing exec liveness probe that took longer than the timeout
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":2,"skipped":356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:23:20.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted by liveness probe after startup probe enables it
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
STEP: Creating pod startup-5326b1ce-a86e-4151-8115-b1714c23f7d6 in namespace container-probe-5372
Apr 29 23:23:34.795: INFO: Started pod startup-5326b1ce-a86e-4151-8115-b1714c23f7d6 in namespace container-probe-5372
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 23:23:34.797: INFO: Initial restart count of pod startup-5326b1ce-a86e-4151-8115-b1714c23f7d6 is 0
Apr 29 23:24:32.987: INFO: Restart count of pod container-probe-5372/startup-5326b1ce-a86e-4151-8115-b1714c23f7d6 is now 1 (58.189387389s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:32.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5372" for this suite.
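The spec above exercises the kubelet behavior in which a liveness probe is held back until the startup probe succeeds; the ~58s to the first restart is dominated by the startup probe window. The gating rule can be sketched as follows. This is an illustrative model, not the e2e test's own code, and the default `failure_threshold` of 3 is an assumption standing in for the probe's `failureThreshold` field:

```python
def should_restart(started: bool, liveness_failures: int,
                   failure_threshold: int = 3) -> bool:
    """Model of kubelet probe gating: liveness results only count
    once the container's startup probe has succeeded."""
    if not started:
        # Startup probe has not succeeded yet, so the liveness probe
        # is not executed at all and failures cannot accumulate.
        return False
    return liveness_failures >= failure_threshold
```

Before startup succeeds, even a permanently failing liveness endpoint causes no restarts; once startup reports success, the liveness probe is enabled and the container restarts after `failureThreshold` consecutive failures.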
• [SLOW TEST:72.259 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted by liveness probe after startup probe enables it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":2,"skipped":268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:30.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
STEP: Creating pod startup-override-08cbe066-c701-49f6-a599-a453e9956d48 in namespace container-probe-1703
Apr 29 23:24:34.268: INFO: Started pod startup-override-08cbe066-c701-49f6-a599-a453e9956d48 in namespace container-probe-1703
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 23:24:34.271: INFO: Initial restart count of pod startup-override-08cbe066-c701-49f6-a599-a453e9956d48 is 0
Apr 29 23:24:36.279: INFO: Restart count of pod container-probe-1703/startup-override-08cbe066-c701-49f6-a599-a453e9956d48 is now 1 (2.008231586s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:36.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1703" for this suite.
• [SLOW TEST:6.066 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":3,"skipped":436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:33.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 29 23:24:33.365: INFO: Waiting up to 5m0s for pod "security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02" in namespace "security-context-3367" to be "Succeeded or Failed"
Apr 29 23:24:33.366: INFO: Pod "security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02": Phase="Pending", Reason="", readiness=false. Elapsed: 1.750502ms
Apr 29 23:24:35.370: INFO: Pod "security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005536404s
Apr 29 23:24:37.374: INFO: Pod "security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009506598s
STEP: Saw pod success
Apr 29 23:24:37.374: INFO: Pod "security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02" satisfied condition "Succeeded or Failed"
Apr 29 23:24:37.378: INFO: Trying to get logs from node node2 pod security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02 container test-container:
STEP: delete the pod
Apr 29 23:24:37.392: INFO: Waiting for pod security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02 to disappear
Apr 29 23:24:37.394: INFO: Pod security-context-a70a83f6-90ad-4fb2-bf8f-e152f72bdc02 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:37.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3367" for this suite.
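The `container.SecurityContext.RunAsUser` spec above verifies that a container-level `runAsUser` takes precedence over the pod-level `pod.Spec.SecurityContext.RunAsUser`. A minimal sketch of that precedence rule; this is an illustrative model rather than the effective-security-context code the kubelet actually uses, and the UIDs in the usage note are arbitrary:

```python
def effective_run_as_user(pod_security_context: dict,
                          container_security_context: dict):
    """Return the UID the container runs as: a container-level
    runAsUser overrides the pod-level value when both are set."""
    uid = container_security_context.get("runAsUser")
    if uid is not None:
        return uid
    return pod_security_context.get("runAsUser")
```

For example, with `runAsUser: 1001` on the pod and `runAsUser: 2002` on the container, the container runs as UID 2002.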
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":448,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:36.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Apr 29 23:24:36.715: INFO: Waiting up to 5m0s for pod "security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9" in namespace "security-context-6601" to be "Succeeded or Failed"
Apr 29 23:24:36.717: INFO: Pod "security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.444388ms
Apr 29 23:24:38.720: INFO: Pod "security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004827349s
Apr 29 23:24:40.724: INFO: Pod "security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009291284s
STEP: Saw pod success
Apr 29 23:24:40.724: INFO: Pod "security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9" satisfied condition "Succeeded or Failed"
Apr 29 23:24:40.727: INFO: Trying to get logs from node node1 pod security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9 container test-container:
STEP: delete the pod
Apr 29 23:24:40.744: INFO: Waiting for pod security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9 to disappear
Apr 29 23:24:40.746: INFO: Pod security-context-4bf7c6e6-b3c2-4350-9559-1863771372a9 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:40.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6601" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":651,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:19.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
Apr 29 23:24:41.111: INFO: The status of Pod startup-f78fe45d-0d1f-471f-b63c-dbfaf2e0b6df is Running (Ready = true)
Apr 29 23:24:41.114: INFO: Container started at 2022-04-29 23:24:41.109232988 +0000 UTC m=+82.825051174, pod became ready at 2022-04-29 23:24:41.111754724 +0000 UTC m=+82.827572895, 2.521721ms after startupProbe succeeded
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:41.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7705" for this suite.
• [SLOW TEST:22.060 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
------------------------------
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":3,"skipped":319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:37.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Apr 29 23:24:37.471: INFO: Waiting up to 5m0s for pod "security-context-bff89843-34de-4ace-a7fa-71b167260b23" in namespace "security-context-7099" to be "Succeeded or Failed"
Apr 29 23:24:37.473: INFO: Pod "security-context-bff89843-34de-4ace-a7fa-71b167260b23": Phase="Pending", Reason="", readiness=false. Elapsed: 1.940218ms
Apr 29 23:24:39.478: INFO: Pod "security-context-bff89843-34de-4ace-a7fa-71b167260b23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007058571s
Apr 29 23:24:41.482: INFO: Pod "security-context-bff89843-34de-4ace-a7fa-71b167260b23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011136326s
STEP: Saw pod success
Apr 29 23:24:41.482: INFO: Pod "security-context-bff89843-34de-4ace-a7fa-71b167260b23" satisfied condition "Succeeded or Failed"
Apr 29 23:24:41.484: INFO: Trying to get logs from node node2 pod security-context-bff89843-34de-4ace-a7fa-71b167260b23 container test-container:
STEP: delete the pod
Apr 29 23:24:41.496: INFO: Waiting for pod security-context-bff89843-34de-4ace-a7fa-71b167260b23 to disappear
Apr 29 23:24:41.498: INFO: Pod security-context-bff89843-34de-4ace-a7fa-71b167260b23 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:41.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7099" for this suite.
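Both seccomp specs in this run still drive the profile through the legacy `seccomp.security.alpha.kubernetes.io/pod` annotation visible in the log ("unconfined" and "runtime/default"); in current API versions the same settings live in the structured `securityContext.seccompProfile` field. A simplified sketch of the annotation-to-field mapping, condensed from the documented translation and not the apiserver's actual code:

```python
def seccomp_profile_from_annotation(value: str) -> dict:
    """Translate a legacy seccomp annotation value into the
    structured securityContext.seccompProfile field."""
    if value == "runtime/default":
        return {"type": "RuntimeDefault"}
    if value == "unconfined":
        return {"type": "Unconfined"}
    if value.startswith("localhost/"):
        # localhost/<path> names a profile file on the node.
        return {"type": "Localhost",
                "localhostProfile": value[len("localhost/"):]}
    raise ValueError(f"unrecognized seccomp annotation: {value!r}")
```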
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":4,"skipped":465,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:40.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
[It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:42.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-5201" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":5,"skipped":715,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:19.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] graceful pod terminated should wait until preStop hook completes the process
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: waiting for pod running
STEP: deleting the pod gracefully
STEP: verifying the pod is running while in the graceful period termination
Apr 29 23:24:43.496: INFO: pod is running
[AfterEach] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:43.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4586" for this suite.
• [SLOW TEST:24.065 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":5,"skipped":406,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:41.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:45.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-491" for this suite.
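The private-registry spec's "create image pull secret" step builds a `kubernetes.io/dockerconfigjson` secret whose data is a base64-encoded `.dockerconfigjson` document. That payload format can be sketched as follows; the registry name and credentials are placeholders, and the actual test builds the secret through the e2e framework's helpers rather than this function:

```python
import base64
import json

def docker_config_json(registry: str, username: str, password: str) -> str:
    """Build the base64-encoded .dockerconfigjson payload carried by a
    kubernetes.io/dockerconfigjson image pull secret."""
    # The "auth" entry is the base64 of "username:password".
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    config = {"auths": {registry: {"username": username,
                                   "password": password,
                                   "auth": auth}}}
    return base64.b64encode(json.dumps(config).encode()).decode()
```

The resulting string goes under the secret's `data[".dockerconfigjson"]` key, and the pod references the secret via `spec.imagePullSecrets`.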
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":4,"skipped":391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:42.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Apr 29 23:24:42.989: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-7242e1f5-3632-4c2c-9609-a19807a3c4be" in namespace "security-context-test-7623" to be "Succeeded or Failed"
Apr 29 23:24:43.019: INFO: Pod "alpine-nnp-true-7242e1f5-3632-4c2c-9609-a19807a3c4be": Phase="Pending", Reason="", readiness=false. Elapsed: 29.952045ms
Apr 29 23:24:45.023: INFO: Pod "alpine-nnp-true-7242e1f5-3632-4c2c-9609-a19807a3c4be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033334055s
Apr 29 23:24:47.027: INFO: Pod "alpine-nnp-true-7242e1f5-3632-4c2c-9609-a19807a3c4be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038078428s
Apr 29 23:24:47.027: INFO: Pod "alpine-nnp-true-7242e1f5-3632-4c2c-9609-a19807a3c4be" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:47.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7623" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:47.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should reject a Pod requesting a RuntimeClass with conflicting node selector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41
[AfterEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:24:47.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1385" for this suite.
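The RuntimeClass rejection above comes from merging the RuntimeClass's `scheduling.nodeSelector` into the pod's own `nodeSelector`: if both set the same key to different values, the pod is rejected at admission instead of being scheduled to a node that cannot satisfy both. A sketch of that merge rule; this is an illustrative model, not the admission plugin's actual code:

```python
def merge_node_selectors(pod_selector: dict,
                         runtime_class_selector: dict) -> dict:
    """Merge a RuntimeClass scheduling nodeSelector into a pod's
    nodeSelector. A key set to different values in the two maps is a
    conflict, which causes the pod to be rejected."""
    merged = dict(pod_selector)
    for key, value in runtime_class_selector.items():
        if key in merged and merged[key] != value:
            raise ValueError(f"conflicting nodeSelector value for key {key!r}")
        merged[key] = value
    return merged
```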
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":7,"skipped":828,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:24:43.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Apr 29 23:24:43.912: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-9744cb44-9f76-479e-9dda-9b81a1cfaadf" in namespace "security-context-test-8853" to be "Succeeded or Failed" Apr 29 23:24:43.914: INFO: Pod "alpine-nnp-nil-9744cb44-9f76-479e-9dda-9b81a1cfaadf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131719ms Apr 29 23:24:45.916: INFO: Pod "alpine-nnp-nil-9744cb44-9f76-479e-9dda-9b81a1cfaadf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004797462s Apr 29 23:24:47.921: INFO: Pod "alpine-nnp-nil-9744cb44-9f76-479e-9dda-9b81a1cfaadf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009113756s Apr 29 23:24:49.924: INFO: Pod "alpine-nnp-nil-9744cb44-9f76-479e-9dda-9b81a1cfaadf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.012090017s Apr 29 23:24:49.924: INFO: Pod "alpine-nnp-nil-9744cb44-9f76-479e-9dda-9b81a1cfaadf" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:24:49.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8853" for this suite. • [SLOW TEST:6.061 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:24:50.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Apr 29 23:24:50.620: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3466" to be "Succeeded or Failed" Apr 29 23:24:50.622: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115308ms Apr 29 23:24:52.626: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005947778s Apr 29 23:24:54.629: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009523472s Apr 29 23:24:54.630: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:24:54.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3466" for this suite. 
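The `explicit-nonroot-uid` spec above submits a pod whose container is pinned to a non-root UID and waits for it to succeed. A minimal sketch of that manifest, assuming the field layout of the Pod API (the UID 1234 and the busybox image are illustrative assumptions, not taken from this log):

```python
# Sketch of the pod behind the "explicit non-root user ID" spec.
# runAsNonRoot asks the kubelet to reject the pod if it would run as UID 0;
# runAsUser pins the container process to a concrete non-root UID.
def nonroot_pod(uid=1234):  # UID is an assumption for illustration
    if uid == 0:
        raise ValueError("runAsNonRoot requires a non-zero UID")
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "explicit-nonroot-uid"},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "explicit-nonroot-uid",
                "image": "busybox",  # assumed image; e2e uses its own test images
                "command": ["id", "-u"],
                "securityContext": {
                    "runAsUser": uid,
                    "runAsNonRoot": True,
                },
            }],
        },
    }

pod = nonroot_pod()
```

The spec then waits for the pod to reach "Succeeded or Failed", as the polling lines above show.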
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:24:54.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Apr 29 23:24:54.932: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Apr 29 23:24:55.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1358 create -f -' Apr 29 23:24:55.408: INFO: stderr: "" Apr 29 23:24:55.408: INFO: stdout: "secret/test-secret created\n" Apr 29 23:24:55.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1358 create -f -' Apr 29 23:24:55.762: INFO: stderr: "" Apr 29 23:24:55.762: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Apr 29 23:25:01.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-1358 logs secret-test-pod test-container' Apr 29 23:25:01.946: INFO: stderr: "" Apr 29 23:25:01.946: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:25:01.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1358" for this suite. 
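The two `kubectl create -f -` invocations above apply a Secret (`test-secret`) and a pod (`secret-test-pod`) that mounts it and prints `/etc/secret-volume/data-1`. A sketch of those objects, assuming standard Secret/Pod field names (the busybox image and the exact command are illustrative; the names, mount path, key, and value come from the log):

```python
import base64

# Secret values are base64-encoded in the "data" field of a v1 Secret.
secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "test-secret"},
    "data": {"data-1": base64.b64encode(b"value-1").decode()},
}

# Pod that mounts the secret as a read-only volume and reads one key.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "secret-test-pod"},
    "spec": {
        "restartPolicy": "Never",
        "volumes": [{"name": "secret-volume",
                     "secret": {"secretName": "test-secret"}}],
        "containers": [{
            "name": "test-container",
            "image": "busybox",  # assumed image
            "command": ["sh", "-c",
                        "cat /etc/secret-volume/data-1"],  # illustrative command
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume",
                              "readOnly": True}],
        }],
    },
}
```

The spec then fetches the container logs (`kubectl logs secret-test-pod test-container`) and checks that the decoded value `value-1` was read back.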
• [SLOW TEST:7.054 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":8,"skipped":1082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:25:02.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Apr 29 23:25:02.040: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:25:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-9269" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:25:02.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Apr 29 23:25:02.289: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:25:02.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-2706" for this suite. 
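Both AppArmor specs in this shard are skipped because the nodes run Debian rather than a supported distro (gci, ubuntu). Had they run, they would select an AppArmor profile per container via the beta annotation; a sketch of that shape (pod and container names are illustrative, the annotation prefix and the `unconfined` value are the documented ones):

```python
# AppArmor profiles are chosen per container via a beta annotation:
#   "runtime/default"      - the container runtime's default profile
#   "unconfined"           - disable AppArmor enforcement (the skipped
#                            "can disable an AppArmor profile" spec)
#   "localhost/<profile>"  - a profile pre-loaded on the node
APPARMOR_PREFIX = "container.apparmor.security.beta.kubernetes.io"

def apparmor_pod(container="test-container", profile="unconfined"):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": "apparmor-test",  # illustrative name
            "annotations": {f"{APPARMOR_PREFIX}/{container}": profile},
        },
        "spec": {"containers": [{"name": container,
                                 "image": "busybox"}]},  # assumed image
    }

pod = apparmor_pod()
```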
S [SKIPPING] in Spec Setup (BeforeEach) [0.028 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:24:45.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Apr 29 23:24:45.495: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:24:47.498: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:24:49.498: INFO: The status of Pod master is Running (Ready = true) Apr 29 23:24:49.513: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:24:51.520: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:24:53.517: INFO: The status of Pod slave is Running (Ready = true) Apr 29 23:24:53.533: INFO: The status of Pod private is Pending, waiting for it to be Running 
(with Ready = true) Apr 29 23:24:55.535: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:24:57.543: INFO: The status of Pod private is Running (Ready = true) Apr 29 23:24:57.559: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:24:59.564: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:25:01.564: INFO: The status of Pod default is Running (Ready = true) Apr 29 23:25:01.568: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:01.568: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:01.674: INFO: Exec stderr: "" Apr 29 23:25:01.677: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:01.677: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:01.767: INFO: Exec stderr: "" Apr 29 23:25:01.769: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:01.769: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:01.854: INFO: Exec stderr: "" Apr 29 23:25:01.856: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:01.857: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:01.940: INFO: Exec stderr: "" Apr 29 23:25:01.943: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] 
Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:01.943: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.051: INFO: Exec stderr: "" Apr 29 23:25:02.053: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.053: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.134: INFO: Exec stderr: "" Apr 29 23:25:02.136: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.136: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.223: INFO: Exec stderr: "" Apr 29 23:25:02.225: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.225: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.305: INFO: Exec stderr: "" Apr 29 23:25:02.307: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.307: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.398: INFO: Exec stderr: "" Apr 29 23:25:02.401: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.401: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.498: INFO: Exec stderr: "" Apr 29 23:25:02.501: 
INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.501: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.589: INFO: Exec stderr: "" Apr 29 23:25:02.591: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.591: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.669: INFO: Exec stderr: "" Apr 29 23:25:02.672: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.672: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.767: INFO: Exec stderr: "" Apr 29 23:25:02.770: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.770: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.857: INFO: Exec stderr: "" Apr 29 23:25:02.859: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.859: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:02.940: INFO: Exec stderr: "" Apr 29 23:25:02.942: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:02.942: INFO: >>> kubeConfig: 
/root/.kube/config Apr 29 23:25:03.035: INFO: Exec stderr: "" Apr 29 23:25:03.037: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:03.037: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:03.120: INFO: Exec stderr: "" Apr 29 23:25:03.123: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:03.123: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:03.214: INFO: Exec stderr: "" Apr 29 23:25:03.217: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:03.217: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:03.302: INFO: Exec stderr: "" Apr 29 23:25:03.305: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:03.305: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:03.391: INFO: Exec stderr: "" Apr 29 23:25:05.410: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7227"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-7227"/host; echo host > 
"/var/lib/kubelet/mount-propagation-7227"/host/file] Namespace:mount-propagation-7227 PodName:hostexec-node1-899b5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 29 23:25:05.410: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:05.504: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:05.504: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:05.597: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:05.600: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:05.600: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:05.681: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:05.683: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:05.683: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:05.770: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:05.774: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false 
Quiet:false} Apr 29 23:25:05.774: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:05.856: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil> Apr 29 23:25:05.859: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:05.859: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:05.950: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:05.952: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:05.952: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.039: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil> Apr 29 23:25:06.041: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.041: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.138: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:06.141: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.141: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.225: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated 
with exit code 1 Apr 29 23:25:06.228: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.228: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.313: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:06.315: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.315: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.404: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil> Apr 29 23:25:06.406: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.406: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.494: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil> Apr 29 23:25:06.497: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.497: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.577: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil> Apr 29 23:25:06.579: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.579: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.669: INFO: 
pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:06.671: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.671: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.768: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:06.771: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.771: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.865: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil> Apr 29 23:25:06.867: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.867: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:06.950: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:06.953: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:06.953: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.060: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit 
code 1 Apr 29 23:25:07.063: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.063: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.165: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil> Apr 29 23:25:07.168: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.168: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.248: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:07.251: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.251: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.343: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Apr 29 23:25:07.343: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7227"/master/file` = master] Namespace:mount-propagation-7227 PodName:hostexec-node1-899b5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 29 23:25:07.343: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.437: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-7227"/slave/file] Namespace:mount-propagation-7227 PodName:hostexec-node1-899b5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 29 23:25:07.437: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.525: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7227"/host] Namespace:mount-propagation-7227 PodName:hostexec-node1-899b5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 29 23:25:07.525: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.635: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-7227 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.635: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.733: INFO: Exec stderr: "" Apr 29 23:25:07.735: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-7227 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.735: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.851: INFO: Exec stderr: "" Apr 29 23:25:07.853: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-7227 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.853: INFO: >>> kubeConfig: /root/.kube/config Apr 29 23:25:07.945: INFO: Exec stderr: "" Apr 29 23:25:07.947: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-7227 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 29 23:25:07.947: INFO: >>> 
kubeConfig: /root/.kube/config Apr 29 23:25:08.040: INFO: Exec stderr: "" Apr 29 23:25:08.040: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-7227"] Namespace:mount-propagation-7227 PodName:hostexec-node1-899b5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Apr 29 23:25:08.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-899b5 in namespace mount-propagation-7227 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:25:08.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-7227" for this suite. • [SLOW TEST:22.685 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:25:02.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra 
sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Apr 29 23:25:02.386: INFO: Waiting up to 5m0s for pod "pod-always-succeeddf2877d8-2d55-40de-9610-4f7bbdd269a3" in namespace "pods-8492" to be "Succeeded or Failed" Apr 29 23:25:02.388: INFO: Pod "pod-always-succeeddf2877d8-2d55-40de-9610-4f7bbdd269a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.418059ms Apr 29 23:25:04.392: INFO: Pod "pod-always-succeeddf2877d8-2d55-40de-9610-4f7bbdd269a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00592198s Apr 29 23:25:06.396: INFO: Pod "pod-always-succeeddf2877d8-2d55-40de-9610-4f7bbdd269a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010176899s Apr 29 23:25:08.399: INFO: Pod "pod-always-succeeddf2877d8-2d55-40de-9610-4f7bbdd269a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012915904s STEP: Saw pod success Apr 29 23:25:08.399: INFO: Pod "pod-always-succeeddf2877d8-2d55-40de-9610-4f7bbdd269a3" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:25:10.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8492" for this suite. 
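Stepping back to the Mount propagation spec that passed earlier in this shard: its `cat /mnt/test/<pod>/file` matrix checks exactly which tmpfs mounts each of the pods `master`, `slave`, `private`, and `default` (and the host) can see. That visibility follows from the three `mountPropagation` modes; the pod-to-mode mapping below is inferred from the test's naming convention, and this small model reproduces the observed results:

```python
# Mount visibility under the three volumeMounts.mountPropagation modes:
#   Bidirectional   - mounts flow container <-> host ("master" pod)
#   HostToContainer - mounts flow host -> container only ("slave" pod)
#   None            - no propagation either way ("private"/"default" pods)
MODES = {
    "master": "Bidirectional",
    "slave": "HostToContainer",
    "private": "None",
    "default": "None",
}

def visible_mounts(viewer):
    """Which tmpfs mounts (one per pod, plus 'host') the viewer can see."""
    # Mounts present in the host namespace: the host's own mount plus
    # anything propagated outward by a Bidirectional pod.
    on_host = {"host"} | {p for p, m in MODES.items() if m == "Bidirectional"}
    if viewer == "host":
        return on_host
    seen = {viewer}  # every pod sees its own mount
    if MODES[viewer] in ("Bidirectional", "HostToContainer"):
        seen |= on_host  # host-side mounts propagate back into the container
    return seen

matrix = {v: visible_mounts(v)
          for v in ["master", "slave", "private", "default", "host"]}
```

This matches the log: `default` and `private` see only their own mounts, `master` additionally sees the host's, `slave` sees the host's and master's, and the host sees master's file but not slave's.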
• [SLOW TEST:8.062 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":9,"skipped":1263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:25:08.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Apr 29 23:25:08.453: INFO: Waiting up to 5m0s for pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9" in namespace "security-context-8550" to be "Succeeded or Failed"
Apr 29 23:25:08.455: INFO: Pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014551ms
Apr 29 23:25:10.458: INFO: Pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005604963s
Apr 29 23:25:12.463: INFO: Pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010354718s
Apr 29 23:25:14.469: INFO: Pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016457996s
Apr 29 23:25:16.474: INFO: Pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021613619s
STEP: Saw pod success
Apr 29 23:25:16.474: INFO: Pod "security-context-98bc198d-cd97-42a2-ac72-6827180921a9" satisfied condition "Succeeded or Failed"
Apr 29 23:25:16.476: INFO: Trying to get logs from node node2 pod security-context-98bc198d-cd97-42a2-ac72-6827180921a9 container test-container:
STEP: delete the pod
Apr 29 23:25:16.570: INFO: Waiting for pod security-context-98bc198d-cd97-42a2-ac72-6827180921a9 to disappear
Apr 29 23:25:16.572: INFO: Pod security-context-98bc198d-cd97-42a2-ac72-6827180921a9 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:25:16.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8550" for this suite.
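The repeated `Waiting up to 5m0s for pod … to be "Succeeded or Failed"` records above come from a simple poll loop: fetch the pod, check its phase, sleep, repeat until a deadline. A minimal sketch of that pattern (the function name, the 2-second interval, and the injected `get_phase` callable are illustrative assumptions; the real helper lives in the e2e framework):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or timeout expires.

    get_phase: callable returning the pod's current phase string
               (stands in for a GET of the Pod object).
    Returns "Succeeded" or "Failed"; raises TimeoutError otherwise.
    """
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach Succeeded or Failed in time")

# Simulated status source standing in for repeated API reads:
phases = iter(["Pending", "Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None))  # Succeeded
```

Injecting the clock and sleep functions keeps the loop testable without real waiting, which is also why the log can report sub-millisecond first-poll elapsed times.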
• [SLOW TEST:8.247 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:03.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted startup probe fails
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313
STEP: Creating pod startup-1fe54e49-b9b3-40c1-968f-3c1de1d5bec1 in namespace container-probe-3316
Apr 29 23:24:09.716: INFO: Started pod startup-1fe54e49-b9b3-40c1-968f-3c1de1d5bec1 in namespace container-probe-3316
STEP: checking the pod's current state and verifying that restartCount is present
Apr 29 23:24:09.718: INFO: Initial restart count of pod startup-1fe54e49-b9b3-40c1-968f-3c1de1d5bec1 is 0
Apr 29 23:25:19.863: INFO: Restart count of pod container-probe-3316/startup-1fe54e49-b9b3-40c1-968f-3c1de1d5bec1 is now 1 (1m10.144176297s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:25:19.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3316" for this suite.
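The restart recorded above (restartCount going 0 → 1 after roughly 1m10s) is what a failing startup probe produces: the kubelet kills the container once the probe has failed `failureThreshold` times, spaced `periodSeconds` apart, after any `initialDelaySeconds`. A rough upper bound on when the restart lands can be sketched as follows (the parameter values are hypothetical, not read from this run):

```python
def startup_probe_kill_bound(initial_delay, period, failure_threshold):
    """Approximate worst-case seconds before the kubelet kills a container
    whose startup probe never succeeds: the initial delay plus
    failure_threshold probe attempts spaced period seconds apart."""
    return initial_delay + failure_threshold * period

# Hypothetical probe settings: 7 allowed failures at 10s intervals -> 70s,
# the same order of magnitude as the ~1m10s elapsed in the log above.
print(startup_probe_kill_bound(0, 10, 7))  # 70
```

The actual probe parameters are defined in the test's pod spec, not in this log; the point is only that the observed restart latency is a product of these two knobs, not a fixed kubelet constant.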
• [SLOW TEST:76.200 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted startup probe fails
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":5,"skipped":635,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:21.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
STEP: Creating pod busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 in namespace container-probe-3195
Apr 29 23:24:25.487: INFO: Started pod busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 in namespace container-probe-3195
Apr 29 23:24:25.487: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (2.105µs elapsed)
Apr 29 23:24:27.487: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (2.000166955s elapsed)
Apr 29 23:24:29.488: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (4.001020165s elapsed)
Apr 29 23:24:31.489: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (6.002417914s elapsed)
Apr 29 23:24:33.490: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (8.002722613s elapsed)
Apr 29 23:24:35.490: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (10.003327472s elapsed)
Apr 29 23:24:37.492: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (12.004546337s elapsed)
Apr 29 23:24:39.492: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (14.005281616s elapsed)
Apr 29 23:24:41.494: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (16.007173842s elapsed)
Apr 29 23:24:43.495: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (18.008177863s elapsed)
Apr 29 23:24:45.496: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (20.008635348s elapsed)
Apr 29 23:24:47.499: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (22.011839789s elapsed)
Apr 29 23:24:49.500: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (24.012987772s elapsed)
Apr 29 23:24:51.500: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (26.013216096s elapsed)
Apr 29 23:24:53.501: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (28.014431035s elapsed)
Apr 29 23:24:55.502: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (30.015036918s elapsed)
Apr 29 23:24:57.508: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (32.020561036s elapsed)
Apr 29 23:24:59.511: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (34.023841528s elapsed)
Apr 29 23:25:01.516: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (36.029035419s elapsed)
Apr 29 23:25:03.516: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (38.029444566s elapsed)
Apr 29 23:25:05.517: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (40.029758414s elapsed)
Apr 29 23:25:07.518: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (42.030968971s elapsed)
Apr 29 23:25:09.519: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (44.032162412s elapsed)
Apr 29 23:25:11.523: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (46.035542477s elapsed)
Apr 29 23:25:13.524: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (48.036700958s elapsed)
Apr 29 23:25:15.526: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (50.03857883s elapsed)
Apr 29 23:25:17.528: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (52.041412848s elapsed)
Apr 29 23:25:19.530: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (54.042915169s elapsed)
Apr 29 23:25:21.535: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (56.048147988s elapsed)
Apr 29 23:25:23.536: INFO: pod container-probe-3195/busybox-899f4b7d-f90a-4321-8e41-4ed9e1c89de0 is not ready (58.048613771s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:25:25.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3195" for this suite.
• [SLOW TEST:64.101 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237
------------------------------
{"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":6,"skipped":590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:25:20.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 29 23:25:27.330: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:25:27.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1622" for this suite.
• [SLOW TEST:7.078 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":6,"skipped":851,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Apr 29 23:25:27.435: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":6,"skipped":562,"failed":0}
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:25:16.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 29 23:25:16.612: INFO: Waiting up to 5m0s for pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79" in namespace "security-context-6392" to be "Succeeded or Failed"
Apr 29 23:25:16.614: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.228743ms
Apr 29 23:25:18.616: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004803858s
Apr 29 23:25:20.621: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009019472s
Apr 29 23:25:22.626: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01390411s
Apr 29 23:25:24.632: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020238705s
Apr 29 23:25:26.639: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02711073s
Apr 29 23:25:28.643: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.030928824s
STEP: Saw pod success
Apr 29 23:25:28.643: INFO: Pod "security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79" satisfied condition "Succeeded or Failed"
Apr 29 23:25:28.645: INFO: Trying to get logs from node node2 pod security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79 container test-container:
STEP: delete the pod
Apr 29 23:25:28.731: INFO: Waiting for pod security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79 to disappear
Apr 29 23:25:28.734: INFO: Pod security-context-fdc01afc-f687-45eb-b296-6b9dc6e2cd79 no longer exists
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:25:28.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6392" for this suite.
• [SLOW TEST:12.159 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":7,"skipped":562,"failed":0}
Apr 29 23:25:28.743: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:25:10.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Delete Grace Period
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 29 23:25:25.809: INFO: start=2022-04-29 23:25:20.792140318 +0000 UTC m=+122.509222623, now=2022-04-29 23:25:25.809063623 +0000 UTC m=+127.526146018, kubelet pod: {"metadata":{"name":"pod-submit-remove-db47fec9-67f0-4ab8-997e-c23c91d7f10e","namespace":"pods-5983","uid":"75ba4d3f-8bc7-400a-8c31-fb07529a2e84","resourceVersion":"79335","creationTimestamp":"2022-04-29T23:25:10Z","deletionTimestamp":"2022-04-29T23:25:50Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"753824143"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.215\"\n ],\n \"mac\": \"f6:66:fe:08:32:b3\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.215\"\n ],\n \"mac\": \"f6:66:fe:08:32:b3\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-04-29T23:25:10.768737955Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-04-29T23:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-p9x5g","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-p9x5g","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"sta
tus":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T23:25:10Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T23:25:15Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T23:25:15Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-04-29T23:25:10Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.215","podIPs":[{"ip":"10.244.4.215"}],"startTime":"2022-04-29T23:25:10Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2022-04-29T23:25:15Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://1208e3619c10858686819859e6aadc5f867dd02baf5c80d62c25178967d0c913","started":true}],"qosClass":"BestEffort"}} Apr 29 23:25:30.808: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:25:30.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5983" for this suite. 
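The kubelet-side pod dump above shows how graceful deletion is surfaced to observers: the API server stamps `deletionGracePeriodSeconds` (30 here) and a `deletionTimestamp` equal to the time the DELETE was accepted plus that grace period, and the test then polls until the kubelet no longer reports the pod. That timestamp arithmetic can be checked directly against the values in this log (a sketch; the request time is taken from the `start=` field above):

```python
from datetime import datetime, timedelta, timezone

def deletion_timestamp(requested_at, grace_period_seconds):
    """deletionTimestamp = time the DELETE was accepted + grace period."""
    return requested_at + timedelta(seconds=grace_period_seconds)

# DELETE accepted around start=2022-04-29 23:25:20 (from the log), grace 30s;
# the result matches the dumped "deletionTimestamp":"2022-04-29T23:25:50Z".
requested = datetime(2022, 4, 29, 23, 25, 20, tzinfo=timezone.utc)
print(deletion_timestamp(requested, 30).isoformat())
```

Note the pod spec in the dump also sets `terminationGracePeriodSeconds: 0`; the 30s value is the default grace period applied to the DELETE request itself, which is why the pod can linger in the kubelet's view for a few seconds after the delete.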
• [SLOW TEST:20.086 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
[BeforeEach] [sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:41.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Apr 29 23:24:41.813: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] liveness pods should be automatically restarted
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
Apr 29 23:24:41.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5411 create -f -'
Apr 29 23:24:42.265: INFO: stderr: ""
Apr 29 23:24:42.265: INFO: stdout: "pod/liveness-exec created\n"
Apr 29 23:24:42.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5411 create -f -'
Apr 29 23:24:42.637: INFO: stderr: ""
Apr 29 23:24:42.637: INFO: stdout: "pod/liveness-http created\n"
STEP: Check restarts
Apr 29 23:24:46.653: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:24:46.653: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:48.655: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:24:48.656: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:50.658: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:24:50.659: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:52.664: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:52.664: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:24:54.671: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:24:54.671: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:56.677: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:56.678: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:24:58.679: INFO: Pod: liveness-http, restart count:0
Apr 29 23:24:58.681: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:00.683: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:00.684: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:02.686: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:02.688: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:04.689: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:04.691: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:06.694: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:06.694: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:08.697: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:08.697: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:10.700: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:10.700: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:12.703: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:12.704: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:14.709: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:14.709: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:16.713: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:16.713: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:18.716: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:18.716: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:20.719: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:20.719: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:22.724: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:22.724: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:24.728: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:24.728: INFO: Pod: liveness-http, restart count:0
Apr 29 23:25:26.734: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:26.734: INFO: Pod: liveness-http, restart count:1
Apr 29 23:25:26.734: INFO: Saw liveness-http restart, succeeded...
Apr 29 23:25:28.738: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:30.744: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:32.749: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:34.753: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:36.761: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:38.765: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:40.770: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:42.775: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:44.781: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:46.787: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:48.791: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:50.794: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:52.798: INFO: Pod: liveness-exec, restart count:0
Apr 29 23:25:54.802: INFO: Pod: liveness-exec, restart count:1
Apr 29 23:25:54.802: INFO: Saw liveness-exec restart, succeeded...
[AfterEach] [sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 29 23:25:54.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-5411" for this suite.
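The "Check restarts" loop above simply polls each pod's `restartCount` every couple of seconds and declares success the first time a count goes above zero. Reduced to its essentials, the pattern looks like this (the function name and the injected `poll_counts` callable are illustrative, not the framework's actual API):

```python
def saw_restart(poll_counts, max_polls=40):
    """Poll a callable that returns {pod_name: restart_count}; return the
    first pod observed with restart_count > 0, or None if none restarts
    within max_polls polls."""
    for _ in range(max_polls):
        for name, count in poll_counts().items():
            if count > 0:
                return name
    return None

# Simulated poll results: liveness-http restarts on the third poll,
# mirroring the restart-count records in the log above.
samples = iter([
    {"liveness-exec": 0, "liveness-http": 0},
    {"liveness-exec": 0, "liveness-http": 0},
    {"liveness-exec": 0, "liveness-http": 1},
])
print(saw_restart(lambda: next(samples), max_polls=3))  # liveness-http
```

This is also why the log keeps polling only liveness-exec after 23:25:26: once a pod's restart is seen, it drops out of the watch list while the other pod is still awaited.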
• [SLOW TEST:73.026 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Liveness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66
    liveness pods should be automatically restarted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":5,"skipped":614,"failed":0}
Apr 29 23:25:54.813: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 29 23:24:47.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container Status
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202
[It] should never report success for a pending container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206
STEP: creating pods that should always exit 1 and terminating the pod after a random delay
Apr 29 23:24:55.189: INFO: watch delete seen for pod-submit-status-0-0
Apr 29 23:24:55.189: INFO: Pod pod-submit-status-0-0 on node node1 timings total=7.888960543s t=1.367s run=0s execute=0s
Apr 29 23:24:57.679: INFO: watch delete seen for pod-submit-status-0-1
Apr 29 23:24:57.679: INFO: Pod pod-submit-status-0-1 on node node2 timings total=2.490148541s t=420ms run=0s execute=0s
Apr 29 23:25:02.652: INFO: watch delete seen for pod-submit-status-2-0
Apr 29 23:25:02.652: INFO: Pod pod-submit-status-2-0 on node node2 timings total=15.351423999s t=1.077s run=0s execute=0s
Apr 29 23:25:05.189: INFO: watch delete seen for pod-submit-status-0-2
Apr 29 23:25:05.189: INFO: Pod pod-submit-status-0-2 on node node2 timings total=7.509810606s t=1.926s run=3s execute=0s
Apr 29 23:25:05.200: INFO: watch delete seen for pod-submit-status-1-0
Apr 29 23:25:05.200: INFO: Pod pod-submit-status-1-0 on node node2 timings total=17.899472501s t=1.672s run=0s execute=0s
Apr 29 23:25:07.241: INFO: watch delete seen for pod-submit-status-2-1
Apr 29 23:25:07.241: INFO: Pod pod-submit-status-2-1 on node node2 timings total=4.58922693s t=610ms run=0s execute=0s
Apr 29 23:25:09.799: INFO: watch delete seen for pod-submit-status-1-1
Apr 29 23:25:09.800: INFO: Pod pod-submit-status-1-1 on node node2 timings total=4.599730963s t=163ms run=0s execute=0s
Apr 29 23:25:11.209: INFO: watch delete seen for pod-submit-status-0-3
Apr 29 23:25:11.209: INFO: Pod pod-submit-status-0-3 on node node2 timings total=6.019198343s t=1.322s run=0s execute=0s
Apr 29 23:25:17.199: INFO: watch delete seen for pod-submit-status-2-2
Apr 29 23:25:17.199: INFO: Pod pod-submit-status-2-2 on node node2 timings total=9.957714764s t=838ms run=0s execute=0s
Apr 29 23:25:17.998: INFO: watch delete seen for pod-submit-status-1-2
Apr 29 23:25:17.998: INFO: Pod pod-submit-status-1-2 on node node2 timings total=8.198145793s t=1.56s run=0s execute=0s
Apr 29 23:25:19.399: INFO: watch delete seen for pod-submit-status-0-4
Apr 29 23:25:19.399: INFO: Pod pod-submit-status-0-4 on node node2 timings total=8.189973151s t=1.454s run=0s execute=0s
Apr 29 23:25:22.596: INFO: watch delete seen for pod-submit-status-2-3
Apr 29 23:25:22.596: INFO: Pod pod-submit-status-2-3 on node node2 timings total=5.39739083s t=991ms run=0s execute=0s
Apr 29 23:25:24.198: INFO: watch delete seen for pod-submit-status-0-5
Apr 29 23:25:24.198: INFO: Pod pod-submit-status-0-5 on node node2 timings total=4.799244091s t=863ms run=0s execute=0s
Apr 29 23:25:25.166: INFO: watch delete seen for pod-submit-status-1-3
Apr 29 23:25:25.166: INFO: Pod pod-submit-status-1-3 on node node1 timings total=7.168341371s t=1.018s run=0s execute=0s
Apr 29 23:25:29.997: INFO: watch delete seen for pod-submit-status-1-4
Apr 29 23:25:29.997: INFO: Pod pod-submit-status-1-4 on node node2 timings total=4.831242894s t=1.378s run=0s execute=0s
Apr 29 23:25:32.598: INFO: watch delete seen for pod-submit-status-0-6
Apr 29 23:25:32.598: INFO: Pod pod-submit-status-0-6 on node node2 timings total=8.39989527s t=272ms run=0s execute=0s
Apr 29 23:25:35.139: INFO: watch delete seen for pod-submit-status-2-4
Apr 29 23:25:35.139: INFO: Pod pod-submit-status-2-4 on node node1 timings total=12.542406232s t=1.69s run=2s execute=0s
Apr 29 23:25:35.998: INFO: watch delete seen for pod-submit-status-1-5
Apr 29 23:25:35.998: INFO: Pod pod-submit-status-1-5 on node node2 timings total=6.000972607s t=1.475s run=2s execute=0s
Apr 29 23:25:37.601: INFO: watch delete seen for pod-submit-status-0-7
Apr 29 23:25:37.601: INFO: Pod pod-submit-status-0-7 on node node2 timings total=5.003174645s t=797ms run=0s execute=0s
Apr 29 23:25:40.298: INFO: watch delete seen for pod-submit-status-0-8
Apr 29 23:25:40.298: INFO: Pod pod-submit-status-0-8 on node node2 timings total=2.696948807s t=148ms run=0s execute=0s
Apr 29 23:25:42.603: INFO: watch delete seen for pod-submit-status-2-5
Apr 29 23:25:42.603: INFO: Pod pod-submit-status-2-5 on node node2 timings total=7.463802403s t=1.646s run=0s execute=0s
Apr 29 23:25:43.398: INFO: watch delete seen for pod-submit-status-1-6
Apr 29 23:25:43.398: INFO: Pod pod-submit-status-1-6 on node node2 timings total=7.399467268s t=1.935s run=0s execute=0s
Apr 29 23:25:45.522: INFO: watch delete seen for pod-submit-status-2-6
Apr 29 23:25:45.522: INFO: Pod pod-submit-status-2-6 on node node1 timings total=2.919601168s t=221ms run=0s execute=0s
Apr 29 23:25:55.140: INFO: watch delete seen for pod-submit-status-0-9
Apr 29 23:25:55.140: INFO: Pod pod-submit-status-0-9 on node node1 timings total=14.841565097s t=1.276s run=2s execute=0s
Apr 29 23:25:55.187: INFO: watch delete seen for pod-submit-status-1-7
Apr 29 23:25:55.187: INFO: Pod pod-submit-status-1-7 on node node2 timings total=11.788872315s t=965ms run=0s execute=0s
Apr 29 23:25:55.196: INFO: watch delete seen for pod-submit-status-2-7
Apr 29 23:25:55.196: INFO: Pod pod-submit-status-2-7 on node node2 timings total=9.673294088s t=1.342s run=0s execute=0s
Apr 29 23:25:58.182: INFO: watch delete seen for pod-submit-status-1-8
Apr 29 23:25:58.182: INFO: Pod pod-submit-status-1-8 on node node2 timings total=2.995319536s t=879ms run=0s execute=0s
Apr 29 23:26:05.143: INFO: watch delete seen for pod-submit-status-1-9
Apr 29 23:26:05.143: INFO: Pod pod-submit-status-1-9 on node node1 timings total=6.960269869s t=843ms run=0s execute=0s
Apr 29 23:26:05.154: INFO: watch delete seen for pod-submit-status-2-8
Apr 29 23:26:05.154: INFO: Pod pod-submit-status-2-8 on node node1 timings total=9.958166194s t=1.811s run=0s execute=0s
Apr 29 23:26:05.203: INFO: watch delete seen for pod-submit-status-0-10
Apr 29 23:26:05.203: INFO: Pod pod-submit-status-0-10 on node node2 timings total=10.062993952s t=1.193s run=0s execute=0s
Apr 29 23:26:15.154: INFO: watch delete seen for pod-submit-status-1-10
Apr 29 23:26:15.154: INFO: Pod pod-submit-status-1-10 on node node1 timings total=10.011611461s t=1.256s run=0s execute=0s
Apr 29 23:26:15.202: INFO: watch delete seen for pod-submit-status-2-9
Apr 29 23:26:15.202: INFO: Pod pod-submit-status-2-9 on node node2 timings total=10.047938113s t=1.761s run=2s execute=0s
Apr 29 23:26:17.400: INFO: watch delete seen for pod-submit-status-2-10
Apr 29 23:26:17.400: INFO: Pod pod-submit-status-2-10 on node node2 timings total=2.197892556s t=603ms run=0s execute=0s
Apr 29 23:26:25.137: INFO: watch delete seen for pod-submit-status-2-11
Apr 29 23:26:25.137: INFO: Pod pod-submit-status-2-11 on node node1 timings total=7.736997389s t=1.179s run=0s execute=0s
Apr 29
23:26:25.183: INFO: watch delete seen for pod-submit-status-1-11 Apr 29 23:26:25.183: INFO: Pod pod-submit-status-1-11 on node node2 timings total=10.028885865s t=1.004s run=0s execute=0s Apr 29 23:26:35.136: INFO: watch delete seen for pod-submit-status-2-12 Apr 29 23:26:35.136: INFO: Pod pod-submit-status-2-12 on node node1 timings total=9.999636219s t=1.291s run=0s execute=0s Apr 29 23:26:35.146: INFO: watch delete seen for pod-submit-status-1-12 Apr 29 23:26:35.147: INFO: Pod pod-submit-status-1-12 on node node1 timings total=9.963151161s t=141ms run=0s execute=0s Apr 29 23:26:45.138: INFO: watch delete seen for pod-submit-status-2-13 Apr 29 23:26:45.138: INFO: Pod pod-submit-status-2-13 on node node1 timings total=10.001321959s t=285ms run=0s execute=0s Apr 29 23:26:45.148: INFO: watch delete seen for pod-submit-status-1-13 Apr 29 23:26:45.148: INFO: Pod pod-submit-status-1-13 on node node1 timings total=10.001168868s t=264ms run=0s execute=0s Apr 29 23:26:55.187: INFO: watch delete seen for pod-submit-status-1-14 Apr 29 23:26:55.187: INFO: Pod pod-submit-status-1-14 on node node2 timings total=10.039538697s t=1.162s run=0s execute=0s Apr 29 23:26:55.196: INFO: watch delete seen for pod-submit-status-2-14 Apr 29 23:26:55.196: INFO: Pod pod-submit-status-2-14 on node node2 timings total=10.058304571s t=1.271s run=0s execute=0s Apr 29 23:27:00.620: INFO: watch delete seen for pod-submit-status-0-11 Apr 29 23:27:00.620: INFO: Pod pod-submit-status-0-11 on node node1 timings total=55.417400426s t=114ms run=0s execute=0s Apr 29 23:27:02.110: INFO: watch delete seen for pod-submit-status-0-12 Apr 29 23:27:02.110: INFO: Pod pod-submit-status-0-12 on node node2 timings total=1.489994828s t=245ms run=0s execute=0s Apr 29 23:27:15.190: INFO: watch delete seen for pod-submit-status-0-13 Apr 29 23:27:15.191: INFO: Pod pod-submit-status-0-13 on node node2 timings total=13.080228175s t=363ms run=0s execute=0s Apr 29 23:27:25.134: INFO: watch delete seen for 
pod-submit-status-0-14 Apr 29 23:27:25.135: INFO: Pod pod-submit-status-0-14 on node node1 timings total=9.943870488s t=1.689s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:27:25.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8588" for this suite. • [SLOW TEST:157.862 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":8,"skipped":838,"failed":0} Apr 29 23:27:25.147: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:26.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-7fd761cc-10c6-4d99-8d1a-d9ab22bd8e97 in namespace container-probe-3716 Apr 29 
23:23:44.434: INFO: Started pod startup-7fd761cc-10c6-4d99-8d1a-d9ab22bd8e97 in namespace container-probe-3716 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 23:23:44.436: INFO: Initial restart count of pod startup-7fd761cc-10c6-4d99-8d1a-d9ab22bd8e97 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:27:45.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3716" for this suite. • [SLOW TEST:258.633 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":3,"skipped":57,"failed":0} Apr 29 23:27:45.033: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:34.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Apr 
29 23:23:34.657: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Apr 29 23:23:35.668: INFO: node status heartbeat is unchanged for 1.003962937s, waiting for 1m20s Apr 29 23:23:36.671: INFO: node status heartbeat is unchanged for 2.006332978s, waiting for 1m20s Apr 29 23:23:37.668: INFO: node status heartbeat is unchanged for 3.004186616s, waiting for 1m20s Apr 29 23:23:38.669: INFO: node status heartbeat is unchanged for 4.004290751s, waiting for 1m20s Apr 29 23:23:39.668: INFO: node status heartbeat is unchanged for 5.003314148s, waiting for 1m20s Apr 29 23:23:40.671: INFO: node status heartbeat is unchanged for 6.006540506s, waiting for 1m20s Apr 29 23:23:41.668: INFO: node status heartbeat is unchanged for 7.004105818s, waiting for 1m20s Apr 29 23:23:42.669: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Apr 29 23:23:42.674: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:30 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:41 +0000 UTC"},    
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:41 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:41 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 29 23:23:43.668: INFO: node status heartbeat is unchanged for 999.476439ms, waiting for 1m20s
Apr 29 23:23:44.668: INFO: node status heartbeat is unchanged for 1.999340282s, waiting for 1m20s
Apr 29 23:23:45.670: INFO: node status heartbeat is unchanged for 3.000754141s, waiting for 1m20s
Apr 29 23:23:46.668: INFO: node status heartbeat is unchanged for 3.99927931s, waiting for 1m20s
Apr 29 23:23:47.670: INFO: node status heartbeat is unchanged for 5.000894361s, waiting for 1m20s
Apr 29 23:23:48.668: INFO: node status heartbeat is unchanged for 5.999250975s, waiting for 1m20s
Apr 29 23:23:49.669: INFO: node status heartbeat is unchanged for 7.000575393s, waiting for 1m20s
Apr 29 23:23:50.669: INFO: node status heartbeat is unchanged for 8.000080509s, waiting for 1m20s
Apr 29 23:23:51.668: INFO: node status heartbeat is unchanged for 8.99899257s, waiting for 1m20s
Apr 29 23:23:52.670: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 29 23:23:52.675: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:41 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:51 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:41 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:51 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:41 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:51 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Apr 29 23:23:53.669: INFO: node status heartbeat is unchanged for 998.889689ms, waiting for 1m20s
Apr 29 23:23:54.670: INFO: node status heartbeat is unchanged for 2.000710451s, waiting for 1m20s
Apr 29 23:23:55.669: INFO: node status heartbeat is unchanged for 2.999612694s, waiting for 1m20s
Apr 29 23:23:56.670: INFO: node status heartbeat is unchanged for 4.000216039s, waiting for 1m20s
Apr 29 23:23:57.669: INFO: node status heartbeat is unchanged for 4.999346009s, waiting for 1m20s
Apr 29 23:23:58.669: INFO: node status heartbeat is unchanged for 5.999670196s, waiting for 1m20s
Apr 29 23:23:59.669: INFO: node status heartbeat is unchanged for 6.998994795s, waiting for 1m20s
Apr 29 23:24:00.670: INFO: node status heartbeat is unchanged for 8.000015214s, waiting for 1m20s
Apr 29 23:24:01.668: INFO: node status heartbeat is unchanged for 8.998838098s, waiting for 1m20s
Apr 29 23:24:02.671: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 29 23:24:02.676: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:51 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:01 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:51 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:01 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:23:51 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:01 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Apr 29 23:24:03.668: INFO: node status heartbeat is unchanged for 997.65696ms, waiting for 1m20s
Apr 29 23:24:04.671: INFO: node status heartbeat is unchanged for 1.999891204s, waiting for 1m20s
Apr 29 23:24:05.669: INFO: node status heartbeat is unchanged for 2.998335261s, waiting for 1m20s
Apr 29 23:24:06.671: INFO: node status heartbeat is unchanged for 4.00069845s, waiting for 1m20s
Apr 29 23:24:07.669: INFO: node status heartbeat is unchanged for 4.998340636s, waiting for 1m20s
Apr 29 23:24:08.668: INFO: node status heartbeat is unchanged for 5.997238446s, waiting for 1m20s
Apr 29 23:24:09.668: INFO: node status heartbeat is unchanged for 6.997318783s, waiting for 1m20s
Apr 29 23:24:10.668: INFO: node status heartbeat is unchanged for 7.997465107s, waiting for 1m20s
Apr 29 23:24:11.670: INFO: node status heartbeat is unchanged for 8.999141245s, waiting for 1m20s
Apr 29 23:24:12.670: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 29 23:24:12.674: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:01 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:11 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:01 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:11 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:01 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:11 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Apr 29 23:24:13.668: INFO: node status heartbeat is unchanged for 998.775821ms, waiting for 1m20s
Apr 29 23:24:14.668: INFO: node status heartbeat is unchanged for 1.99876076s, waiting for 1m20s
Apr 29 23:24:15.670: INFO: node status heartbeat is unchanged for 3.000278761s, waiting for 1m20s
Apr 29 23:24:16.668: INFO: node status heartbeat is unchanged for 3.998953247s, waiting for 1m20s
Apr 29 23:24:17.669: INFO: node status heartbeat is unchanged for 5.00002784s, waiting for 1m20s
Apr 29 23:24:18.669: INFO: node status heartbeat is unchanged for 5.999269397s, waiting for 1m20s
Apr 29 23:24:19.667: INFO: node status heartbeat is unchanged for 6.997746514s, waiting for 1m20s
Apr 29 23:24:20.669: INFO: node status heartbeat is unchanged for 7.999650201s, waiting for 1m20s
Apr 29 23:24:21.669: INFO: node status heartbeat is unchanged for 8.999183757s, waiting for 1m20s
Apr 29 23:24:22.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 29 23:24:22.674: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:11 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:21 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:11 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:21 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:11 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:21 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Apr 29 23:24:23.668: INFO: node status heartbeat is unchanged for 998.662704ms, waiting for 1m20s
Apr 29 23:24:24.667: INFO: node status heartbeat is unchanged for 1.998390352s, waiting for 1m20s
Apr 29 23:24:25.669: INFO: node status heartbeat is unchanged for 2.999756912s, waiting for 1m20s
Apr 29 23:24:26.670: INFO: node status heartbeat is unchanged for 4.000513633s, waiting for 1m20s
Apr 29 23:24:27.668: INFO: node status heartbeat is unchanged for 4.999055972s, waiting for 1m20s
Apr 29 23:24:28.668: INFO: node status heartbeat is unchanged for 5.999071825s, waiting for 1m20s
Apr 29 23:24:29.669: INFO: node status heartbeat is unchanged for 6.999601339s, waiting for 1m20s
Apr 29 23:24:30.668: INFO: node status heartbeat is unchanged for 7.999031722s, waiting for 1m20s
Apr 29 23:24:31.668: INFO: node status heartbeat is unchanged for 8.998858413s, waiting for 1m20s
Apr 29 23:24:32.672: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Apr 29 23:24:32.677: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:21 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:32 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:21 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:32 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:21 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:32 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    NodeInfo: {MachineID: "2a0958eb1b3044f2963c9e5f2e902173", SystemUUID: "00CDA902-D022-E711-906E-0017A4403562", BootID: "fc6a2d14-7726-4aec-9428-6617632ddcbe", KernelVersion: "3.10.0-1160.62.1.el7.x86_64", ...},
    Images: []v1.ContainerImage{
      ...
      // 25 identical elements
      {Names: {"quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1"..., "quay.io/coreos/kube-rbac-proxy:v0.5.0"}, SizeBytes: 46626428},
      {Names: {"localhost:30500/sriov-device-plugin@sha256:145b4fe543408db530a0d"..., "nfvpe/sriov-device-plugin:latest", "localhost:30500/sriov-device-plugin:v3.3.2"}, SizeBytes: 42676189},
+     {
+       Names: []string{
+         "k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34d"...,
+         "k8s.gcr.io/e2e-test-images/nonroot:1.1",
+       },
+       SizeBytes: 42321438,
+     },
      {Names: {"kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f"..., "kubernetesui/metrics-scraper:v1.0.6"}, SizeBytes: 34548789},
      {Names: {"localhost:30500/tasextender@sha256:f09acec459e39fddbd00d2ff6975d"..., "localhost:30500/tasextender:0.4"}, SizeBytes: 28910791},
      ... // 12 identical elements
    },
    VolumesInUse: nil,
    VolumesAttached: nil,
    Config: nil,
  }
Apr 29 23:24:33.669: INFO: node status heartbeat is unchanged for 997.031562ms, waiting for 1m20s
Apr 29 23:24:34.671: INFO: node status heartbeat is unchanged for 1.999244562s, waiting for 1m20s
Apr 29 23:24:35.670: INFO: node status heartbeat is unchanged for 2.998394242s, waiting for 1m20s
Apr 29 23:24:36.668: INFO: node status heartbeat is unchanged for 3.996778127s, waiting for 1m20s
Apr 29 23:24:37.669: INFO: node status heartbeat is unchanged for 4.997622735s, waiting for 1m20s
Apr 29 23:24:38.669: INFO: node status heartbeat is unchanged for 5.997500647s, waiting for 1m20s
Apr 29 23:24:39.670: INFO: node status heartbeat is unchanged for 6.998834511s, waiting for 1m20s
Apr 29 23:24:40.669: INFO: node status heartbeat is unchanged for 7.99752657s, waiting for 1m20s
Apr 29 23:24:41.668: INFO: node status heartbeat is unchanged for 8.996440065s, waiting for 1m20s
Apr 29 23:24:42.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 29 23:24:42.673: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:32 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:42 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Apr 29 23:24:43.668: INFO: node status heartbeat is unchanged for 1.000018261s, waiting for 1m20s
Apr 29 23:24:44.668: INFO: node status heartbeat is unchanged for 1.999867751s, waiting for 1m20s
Apr 29 23:24:45.667: INFO: node status heartbeat is unchanged for 2.999233892s, waiting for 1m20s
Apr 29 23:24:46.670: INFO: node status heartbeat is unchanged for 4.001908606s, waiting for 1m20s
Apr 29 23:24:47.669: INFO: node status heartbeat is unchanged for 5.001257515s, waiting for 1m20s
Apr 29 23:24:48.667: INFO: node status heartbeat is unchanged for 5.998754493s, waiting for 1m20s
Apr 29 23:24:49.669: INFO: node status heartbeat is unchanged for 7.000753923s, waiting for 1m20s
Apr 29 23:24:50.668: INFO: node status heartbeat is unchanged for 7.999832001s, waiting for 1m20s
Apr 29 23:24:51.668: INFO: node status heartbeat is unchanged for 8.999828969s, waiting for 1m20s
Apr 29 23:24:52.670: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Apr 29 23:24:52.675: INFO:
  v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},
      {
        Type:   "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type:   "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type:   "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:42 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:52 +0000 UTC"},
        LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
// 5 identical fields   } Apr 29 23:24:53.668: INFO: node status heartbeat is unchanged for 998.018002ms, waiting for 1m20s Apr 29 23:24:54.668: INFO: node status heartbeat is unchanged for 1.997865676s, waiting for 1m20s Apr 29 23:24:55.668: INFO: node status heartbeat is unchanged for 2.998080277s, waiting for 1m20s Apr 29 23:24:56.670: INFO: node status heartbeat is unchanged for 4.000024203s, waiting for 1m20s Apr 29 23:24:57.668: INFO: node status heartbeat is unchanged for 4.998283483s, waiting for 1m20s Apr 29 23:24:58.668: INFO: node status heartbeat is unchanged for 5.998352418s, waiting for 1m20s Apr 29 23:24:59.669: INFO: node status heartbeat is unchanged for 6.998610082s, waiting for 1m20s Apr 29 23:25:00.669: INFO: node status heartbeat is unchanged for 7.999243622s, waiting for 1m20s Apr 29 23:25:01.668: INFO: node status heartbeat is unchanged for 8.998526671s, waiting for 1m20s Apr 29 23:25:02.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:25:02.673: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:25:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:24:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:25:03.668: INFO: node status heartbeat is unchanged for 999.862052ms, waiting for 1m20s Apr 29 23:25:04.669: INFO: node status heartbeat is unchanged for 2.000186645s, waiting for 1m20s Apr 29 23:25:05.669: INFO: node status heartbeat is unchanged for 3.000411465s, waiting for 1m20s Apr 29 23:25:06.670: INFO: node status heartbeat is unchanged for 4.001952896s, waiting for 1m20s Apr 29 23:25:07.670: INFO: node status heartbeat is unchanged for 5.001814823s, waiting for 1m20s Apr 29 23:25:08.669: INFO: node status heartbeat is unchanged for 6.001018176s, waiting for 1m20s Apr 29 23:25:09.669: INFO: node status heartbeat is unchanged for 7.000504635s, waiting for 1m20s Apr 29 23:25:10.668: INFO: node status heartbeat is unchanged for 7.99914671s, waiting for 1m20s Apr 29 23:25:11.669: INFO: node status heartbeat is unchanged for 9.000721239s, waiting for 1m20s Apr 29 23:25:12.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:25:12.673: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:25:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:25:13.667: INFO: node status heartbeat is unchanged for 998.954987ms, waiting for 1m20s Apr 29 23:25:14.671: INFO: node status heartbeat is unchanged for 2.002628791s, waiting for 1m20s Apr 29 23:25:15.670: INFO: node status heartbeat is unchanged for 3.002061685s, waiting for 1m20s Apr 29 23:25:16.670: INFO: node status heartbeat is unchanged for 4.001808795s, waiting for 1m20s Apr 29 23:25:17.668: INFO: node status heartbeat is unchanged for 4.999730217s, waiting for 1m20s Apr 29 23:25:18.668: INFO: node status heartbeat is unchanged for 6.000033892s, waiting for 1m20s Apr 29 23:25:19.669: INFO: node status heartbeat is unchanged for 7.000572853s, waiting for 1m20s Apr 29 23:25:20.670: INFO: node status heartbeat is unchanged for 8.001454976s, waiting for 1m20s Apr 29 23:25:21.668: INFO: node status heartbeat is unchanged for 8.999937167s, waiting for 1m20s Apr 29 23:25:22.670: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:25:22.675: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:25:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:25:23.669: INFO: node status heartbeat is unchanged for 998.523051ms, waiting for 1m20s Apr 29 23:25:24.668: INFO: node status heartbeat is unchanged for 1.998036329s, waiting for 1m20s Apr 29 23:25:25.669: INFO: node status heartbeat is unchanged for 2.998867032s, waiting for 1m20s Apr 29 23:25:26.668: INFO: node status heartbeat is unchanged for 3.998114554s, waiting for 1m20s Apr 29 23:25:27.669: INFO: node status heartbeat is unchanged for 4.998383859s, waiting for 1m20s Apr 29 23:25:28.668: INFO: node status heartbeat is unchanged for 5.997780617s, waiting for 1m20s Apr 29 23:25:29.669: INFO: node status heartbeat is unchanged for 6.998669198s, waiting for 1m20s Apr 29 23:25:30.667: INFO: node status heartbeat is unchanged for 7.99730386s, waiting for 1m20s Apr 29 23:25:31.671: INFO: node status heartbeat is unchanged for 9.001231877s, waiting for 1m20s Apr 29 23:25:32.672: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:25:32.677: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:25:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:25:33.670: INFO: node status heartbeat is unchanged for 998.078712ms, waiting for 1m20s Apr 29 23:25:34.667: INFO: node status heartbeat is unchanged for 1.995390621s, waiting for 1m20s Apr 29 23:25:35.668: INFO: node status heartbeat is unchanged for 2.996216412s, waiting for 1m20s Apr 29 23:25:36.670: INFO: node status heartbeat is unchanged for 3.998557709s, waiting for 1m20s Apr 29 23:25:37.668: INFO: node status heartbeat is unchanged for 4.996432236s, waiting for 1m20s Apr 29 23:25:38.669: INFO: node status heartbeat is unchanged for 5.997487628s, waiting for 1m20s Apr 29 23:25:39.669: INFO: node status heartbeat is unchanged for 6.997203444s, waiting for 1m20s Apr 29 23:25:40.669: INFO: node status heartbeat is unchanged for 7.997185264s, waiting for 1m20s Apr 29 23:25:41.669: INFO: node status heartbeat is unchanged for 8.997689753s, waiting for 1m20s Apr 29 23:25:42.668: INFO: node status heartbeat is unchanged for 9.996054775s, waiting for 1m20s Apr 29 23:25:43.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:25:43.673: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: 
v1.Time{Time: s"2022-04-29 23:25:32 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:42 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:32 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:42 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:32 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:42 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:25:44.671: INFO: node status heartbeat is unchanged for 1.002248695s, waiting for 1m20s Apr 29 23:25:45.668: INFO: node status heartbeat is unchanged for 1.99971396s, waiting for 1m20s Apr 29 23:25:46.672: INFO: node status heartbeat is unchanged for 3.002896362s, waiting for 1m20s Apr 29 23:25:47.671: INFO: node status heartbeat is unchanged for 4.002349229s, waiting for 1m20s Apr 29 23:25:48.668: INFO: node status heartbeat is unchanged for 4.999020162s, waiting for 1m20s Apr 29 23:25:49.668: INFO: node status heartbeat is unchanged for 5.999572958s, waiting for 1m20s Apr 29 23:25:50.671: INFO: node status heartbeat is unchanged for 7.001975516s, waiting for 1m20s Apr 29 23:25:51.672: INFO: node status heartbeat is unchanged for 8.003037735s, waiting for 1m20s Apr 29 23:25:52.669: INFO: node status heartbeat is unchanged for 9.000625595s, waiting for 1m20s Apr 29 23:25:53.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:25:53.674: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:42 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:25:52 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:42 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:52 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:42 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:52 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:25:54.671: INFO: node status heartbeat is unchanged for 1.001549313s, waiting for 1m20s Apr 29 23:25:55.671: INFO: node status heartbeat is unchanged for 2.001859444s, waiting for 1m20s Apr 29 23:25:56.672: INFO: node status heartbeat is unchanged for 3.002495322s, waiting for 1m20s Apr 29 23:25:57.669: INFO: node status heartbeat is unchanged for 3.999528714s, waiting for 1m20s Apr 29 23:25:58.669: INFO: node status heartbeat is unchanged for 4.999446023s, waiting for 1m20s Apr 29 23:25:59.670: INFO: node status heartbeat is unchanged for 6.001125651s, waiting for 1m20s Apr 29 23:26:00.670: INFO: node status heartbeat is unchanged for 7.000953369s, waiting for 1m20s Apr 29 23:26:01.668: INFO: node status heartbeat is unchanged for 7.999087477s, waiting for 1m20s Apr 29 23:26:02.672: INFO: node status heartbeat is unchanged for 9.00244511s, waiting for 1m20s Apr 29 23:26:03.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:26:03.673: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:26:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:25:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:02 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:26:04.670: INFO: node status heartbeat is unchanged for 1.001790517s, waiting for 1m20s Apr 29 23:26:05.667: INFO: node status heartbeat is unchanged for 1.999058573s, waiting for 1m20s Apr 29 23:26:06.671: INFO: node status heartbeat is unchanged for 3.003032061s, waiting for 1m20s Apr 29 23:26:07.669: INFO: node status heartbeat is unchanged for 4.000500726s, waiting for 1m20s Apr 29 23:26:08.668: INFO: node status heartbeat is unchanged for 4.999153395s, waiting for 1m20s Apr 29 23:26:09.669: INFO: node status heartbeat is unchanged for 6.000452292s, waiting for 1m20s Apr 29 23:26:10.669: INFO: node status heartbeat is unchanged for 7.0009226s, waiting for 1m20s Apr 29 23:26:11.669: INFO: node status heartbeat is unchanged for 8.00061944s, waiting for 1m20s Apr 29 23:26:12.669: INFO: node status heartbeat is unchanged for 9.000644577s, waiting for 1m20s Apr 29 23:26:13.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:26:13.672: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:26:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:12 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:26:14.670: INFO: node status heartbeat is unchanged for 1.002112648s, waiting for 1m20s Apr 29 23:26:15.668: INFO: node status heartbeat is unchanged for 2.000641599s, waiting for 1m20s Apr 29 23:26:16.668: INFO: node status heartbeat is unchanged for 3.000809627s, waiting for 1m20s Apr 29 23:26:17.669: INFO: node status heartbeat is unchanged for 4.001478551s, waiting for 1m20s Apr 29 23:26:18.668: INFO: node status heartbeat is unchanged for 5.00077769s, waiting for 1m20s Apr 29 23:26:19.668: INFO: node status heartbeat is unchanged for 6.00045052s, waiting for 1m20s Apr 29 23:26:20.670: INFO: node status heartbeat is unchanged for 7.00230696s, waiting for 1m20s Apr 29 23:26:21.668: INFO: node status heartbeat is unchanged for 8.000983875s, waiting for 1m20s Apr 29 23:26:22.671: INFO: node status heartbeat is unchanged for 9.003276741s, waiting for 1m20s Apr 29 23:26:23.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:26:23.674: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:26:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:22 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:26:24.670: INFO: node status heartbeat is unchanged for 1.001244563s, waiting for 1m20s Apr 29 23:26:25.669: INFO: node status heartbeat is unchanged for 2.000387829s, waiting for 1m20s Apr 29 23:26:26.668: INFO: node status heartbeat is unchanged for 2.99931454s, waiting for 1m20s Apr 29 23:26:27.668: INFO: node status heartbeat is unchanged for 3.998506763s, waiting for 1m20s Apr 29 23:26:28.668: INFO: node status heartbeat is unchanged for 4.999094748s, waiting for 1m20s Apr 29 23:26:29.669: INFO: node status heartbeat is unchanged for 5.999561594s, waiting for 1m20s Apr 29 23:26:30.671: INFO: node status heartbeat is unchanged for 7.001759413s, waiting for 1m20s Apr 29 23:26:31.668: INFO: node status heartbeat is unchanged for 7.998724417s, waiting for 1m20s Apr 29 23:26:32.670: INFO: node status heartbeat is unchanged for 9.000944332s, waiting for 1m20s Apr 29 23:26:33.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:26:33.673: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:26:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:22 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:26:32 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } [repeated log entries condensed: from Apr 29 23:26:34 through 23:28:13 the same ~10s cycle recurs — nine "node status heartbeat is unchanged" INFO lines followed by a v1.NodeStatus diff identical to the one above except that the LastHeartbeatTime of the MemoryPressure, DiskPressure, and PIDPressure conditions advances by ~10s each cycle; all other fields unchanged]
// 5 identical fields   } Apr 29 23:28:14.669: INFO: node status heartbeat is unchanged for 1.001012101s, waiting for 1m20s Apr 29 23:28:15.670: INFO: node status heartbeat is unchanged for 2.002179351s, waiting for 1m20s Apr 29 23:28:16.671: INFO: node status heartbeat is unchanged for 3.002905497s, waiting for 1m20s Apr 29 23:28:17.670: INFO: node status heartbeat is unchanged for 4.00136689s, waiting for 1m20s Apr 29 23:28:18.668: INFO: node status heartbeat is unchanged for 4.999545338s, waiting for 1m20s Apr 29 23:28:19.671: INFO: node status heartbeat is unchanged for 6.003214149s, waiting for 1m20s Apr 29 23:28:20.671: INFO: node status heartbeat is unchanged for 7.002478082s, waiting for 1m20s Apr 29 23:28:21.670: INFO: node status heartbeat is unchanged for 8.001641666s, waiting for 1m20s Apr 29 23:28:22.669: INFO: node status heartbeat is unchanged for 9.000265559s, waiting for 1m20s Apr 29 23:28:23.669: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:28:23.674: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:28:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:13 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:23 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Apr 29 23:28:24.670: INFO: node status heartbeat is unchanged for 1.001217698s, waiting for 1m20s Apr 29 23:28:25.669: INFO: node status heartbeat is unchanged for 2.000223589s, waiting for 1m20s Apr 29 23:28:26.669: INFO: node status heartbeat is unchanged for 3.00024074s, waiting for 1m20s Apr 29 23:28:27.669: INFO: node status heartbeat is unchanged for 3.99990105s, waiting for 1m20s Apr 29 23:28:28.669: INFO: node status heartbeat is unchanged for 4.999587236s, waiting for 1m20s Apr 29 23:28:29.670: INFO: node status heartbeat is unchanged for 6.000558408s, waiting for 1m20s Apr 29 23:28:30.669: INFO: node status heartbeat is unchanged for 6.999915659s, waiting for 1m20s Apr 29 23:28:31.669: INFO: node status heartbeat is unchanged for 8.000287344s, waiting for 1m20s Apr 29 23:28:32.668: INFO: node status heartbeat is unchanged for 8.999076444s, waiting for 1m20s Apr 29 23:28:33.668: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Apr 29 23:28:33.672: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-04-29 20:02:57 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 
23:28:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2022-04-29 23:28:33 +0000 UTC"},    LastTransitionTime: {Time: s"2022-04-29 19:59:05 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-04-29 20:00:14 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Apr 29 23:28:34.667: INFO: node status heartbeat is unchanged for 999.009554ms, waiting for 1m20s Apr 29 23:28:34.670: INFO: node status heartbeat is unchanged for 1.002290601s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:28:34.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-5023" for this suite. 
• [SLOW TEST:300.048 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":169,"failed":0} Apr 29 23:28:34.687: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:25:25.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-984ba71f-868a-4d0e-8b10-7f0765c4f8bf in namespace container-probe-6188 Apr 29 23:25:34.004: INFO: Started pod liveness-984ba71f-868a-4d0e-8b10-7f0765c4f8bf in namespace container-probe-6188 STEP: checking the pod's current state and verifying that restartCount is present Apr 29 23:25:34.006: INFO: Initial restart count of pod liveness-984ba71f-868a-4d0e-8b10-7f0765c4f8bf is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:29:34.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6188" for this suite. • [SLOW TEST:248.639 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":7,"skipped":806,"failed":0} Apr 29 23:29:34.601: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":2,"skipped":119,"failed":0} [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:44.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Apr 29 23:23:44.950: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:46.954: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with 
Ready = true) Apr 29 23:23:48.955: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:50.954: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Apr 29 23:25:48.179: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-04-29 23:24:54 +0000 UTC restartedAt=2022-04-29 23:25:46 +0000 UTC (52s) STEP: getting restart delay-1 Apr 29 23:27:18.607: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-04-29 23:25:51 +0000 UTC restartedAt=2022-04-29 23:27:17 +0000 UTC (1m26s) STEP: getting restart delay-2 Apr 29 23:30:06.324: INFO: getRestartDelay: restartCount = 6, finishedAt=2022-04-29 23:27:22 +0000 UTC restartedAt=2022-04-29 23:30:04 +0000 UTC (2m42s) STEP: updating the image Apr 29 23:30:06.833: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Apr 29 23:30:28.898: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-04-29 23:30:15 +0000 UTC restartedAt=2022-04-29 23:30:27 +0000 UTC (12s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:30:28.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2760" for this suite. 
• [SLOW TEST:403.990 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":3,"skipped":119,"failed":0} Apr 29 23:30:28.910: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 29 23:23:34.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 Apr 29 23:23:34.746: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:36.750: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:38.750: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:40.751: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:42.750: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Apr 29 23:23:44.751: INFO: The status of Pod back-off-cap is Running (Ready = 
true) STEP: getting restart delay when capped Apr 29 23:34:58.095: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-04-29 23:29:54 +0000 UTC restartedAt=2022-04-29 23:34:56 +0000 UTC (5m2s) Apr 29 23:40:11.550: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-04-29 23:35:01 +0000 UTC restartedAt=2022-04-29 23:40:10 +0000 UTC (5m9s) Apr 29 23:45:20.969: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-04-29 23:40:15 +0000 UTC restartedAt=2022-04-29 23:45:19 +0000 UTC (5m4s) STEP: getting restart delay after a capped delay Apr 29 23:50:31.341: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-04-29 23:45:24 +0000 UTC restartedAt=2022-04-29 23:50:29 +0000 UTC (5m5s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 29 23:50:31.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5871" for this suite. • [SLOW TEST:1616.639 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 ------------------------------ {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":227,"failed":0} Apr 29 23:50:31.352: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":10,"skipped":1433,"failed":0} Apr 29 23:25:30.821: INFO: Running AfterSuite actions on all nodes Apr 29 23:50:31.376: INFO: Running AfterSuite actions on node 1 Apr 29 23:50:31.376: INFO: Skipping dumping logs from cluster Ran 53 of 5773 Specs in 1631.563 seconds SUCCESS! 
-- 53 Passed | 0 Failed | 0 Pending | 5720 Skipped Ginkgo ran 1 suite in 27m13.140586476s Test Suite Passed
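The restart delays recorded above (52s, 1m26s, 2m42s, then roughly 5m between restarts, dropping back to ~12s after the image update) are consistent with kubelet's container restart back-off: an initial delay that doubles on each crash and is capped at MaxContainerBackOff. A minimal sketch of that nominal schedule, assuming the documented defaults (10s initial delay, 5-minute cap); the constants and function are illustrative, not kubelet source, and observed delays in the log include scheduling and execution jitter:

```python
# Illustrative sketch of kubelet's CrashLoopBackOff schedule (not kubelet code).
# Defaults assumed: 10s initial delay, doubling per restart, capped at
# MaxContainerBackOff = 5 minutes. An image update resets the timer, which
# matches the ~12s delay seen right after "Successfully updated pod".

INITIAL_BACKOFF_S = 10     # assumed initial delay
MAX_BACKOFF_S = 5 * 60     # assumed MaxContainerBackOff (5m)

def restart_delays(restarts: int) -> list[int]:
    """Return the nominal back-off delay (seconds) before each restart."""
    delays = []
    delay = INITIAL_BACKOFF_S
    for _ in range(restarts):
        delays.append(delay)
        delay = min(delay * 2, MAX_BACKOFF_S)
    return delays

if __name__ == "__main__":
    # Nominal schedule: 10, 20, 40, 80, 160, then pinned at 300 seconds.
    print(restart_delays(8))
```

Under these assumptions the delay is pinned at 300s from the sixth restart on, matching the ~5m gaps the back-off-cap test waits for before passing.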