Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634961555 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 23 03:59:16.964: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:16.967: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 03:59:16.997: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 03:59:17.051: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 03:59:17.051: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 03:59:17.051: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 03:59:17.051: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 03:59:17.051: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 03:59:17.060: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 03:59:17.060: INFO: e2e test version: v1.21.5
Oct 23 03:59:17.061: INFO: kube-apiserver version: v1.21.1
Oct 23 03:59:17.061: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.066: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.076: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.096: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.076: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.096: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.079: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.101: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.085: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.106: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.096: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.117: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.097: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.119: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.103: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.124: INFO: Cluster IP family: ipv4
Oct 23 03:59:17.104: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.127: INFO: Cluster IP family: ipv4
------------------------------
Oct 23 03:59:17.133: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 03:59:17.155: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:17.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
W1023 03:59:17.718719 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:59:17.718: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:59:17.721: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-7853/configmap-test-1e1ad5df-ab23-426a-a20d-17ce848f6e89
STEP: Updating configMap configmap-7853/configmap-test-1e1ad5df-ab23-426a-a20d-17ce848f6e89
STEP: Verifying update of ConfigMap configmap-7853/configmap-test-1e1ad5df-ab23-426a-a20d-17ce848f6e89
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:59:17.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7853" for this suite.
•SSS
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":1,"skipped":245,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:17.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1023 03:59:17.124744 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:59:17.124: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:59:17.127: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 23 03:59:17.141: INFO: Waiting up to 5m0s for pod "security-context-78367bca-154e-473c-9ea2-723f831d282d" in namespace "security-context-4489" to be "Succeeded or Failed"
Oct 23 03:59:17.143: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d": Phase="Pending", Reason="", readiness=false. Elapsed: 1.839292ms
Oct 23 03:59:19.147: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005685851s
Oct 23 03:59:21.180: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039254324s
Oct 23 03:59:23.187: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045992377s
Oct 23 03:59:25.194: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052718382s
Oct 23 03:59:27.197: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056124991s
STEP: Saw pod success
Oct 23 03:59:27.197: INFO: Pod "security-context-78367bca-154e-473c-9ea2-723f831d282d" satisfied condition "Succeeded or Failed"
Oct 23 03:59:27.199: INFO: Trying to get logs from node node2 pod security-context-78367bca-154e-473c-9ea2-723f831d282d container test-container:
STEP: delete the pod
Oct 23 03:59:27.218: INFO: Waiting for pod security-context-78367bca-154e-473c-9ea2-723f831d282d to disappear
Oct 23 03:59:27.220: INFO: Pod security-context-78367bca-154e-473c-9ea2-723f831d282d no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:59:27.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4489" for this suite.
• [SLOW TEST:10.136 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:17.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1023 03:59:17.159981 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:59:17.160: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:59:17.161: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 23 03:59:17.175: INFO: Waiting up to 5m0s for pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e" in namespace "security-context-8352" to be "Succeeded or Failed"
Oct 23 03:59:17.177: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.952593ms
Oct 23 03:59:19.182: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006759722s
Oct 23 03:59:21.185: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010195348s
Oct 23 03:59:23.188: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013613491s
Oct 23 03:59:25.192: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.017715831s
Oct 23 03:59:27.196: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.021339412s
STEP: Saw pod success
Oct 23 03:59:27.196: INFO: Pod "security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e" satisfied condition "Succeeded or Failed"
Oct 23 03:59:27.198: INFO: Trying to get logs from node node2 pod security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e container test-container:
STEP: delete the pod
Oct 23 03:59:27.217: INFO: Waiting for pod security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e to disappear
Oct 23 03:59:27.219: INFO: Pod security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:59:27.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8352" for this suite.
• [SLOW TEST:10.089 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":14,"failed":0}
S
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":1,"skipped":6,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:17.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1023 03:59:17.212754 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:59:17.212: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:59:17.214: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:59:27.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-767" for this suite.
• [SLOW TEST:10.054 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":25,"failed":0}
------------------------------
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:27.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Oct 23 03:59:27.418: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 03:59:27.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-5671" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:18.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E1023 03:59:26.149333 32 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 105 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x653b640, 0x9beb6a0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc0017e4f0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00396c5c0, 0xc0017e4f00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0008e5200, 0xc00396c5c0, 0xc0039713e0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0008e5200, 0xc00396c5c0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0008e5200, 0xc00396c5c0, 0xc0008e5200, 0xc00396c5c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00396c5c0, 0x14, 0xc00504acc0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc0011011e0, 0xc000022840, 0x14, 0xc00504acc0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc001177020, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001177020, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0011a02a0, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002f482d0, 0x0, 0x768f9a0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002f482d0, 0x768f9a0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003b0c000, 0xc002f482d0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003b0c000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003b0c000, 0xc003afe030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7fe37189eae8, 0xc001802d80, 0x6f05d9d, 0x14, 0xc000d48210, 0x3, 0x3, 0x7745ab8, 0xc000190840, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc001802d80, 0x6f05d9d, 0x14, 0xc00187c340, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc001802d80, 0x6f05d9d, 0x14, 0xc001f5c500, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001802d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001802d80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001802d80, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-7233".
STEP: Found 2 events.
Oct 23 03:59:26.152: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-30687f8a-c9b7-4f8f-ae08-6621a3be51ae: { } Scheduled: Successfully assigned container-probe-7233/startup-30687f8a-c9b7-4f8f-ae08-6621a3be51ae to node2
Oct 23 03:59:26.152: INFO: At 2021-10-23 03:59:26 +0000 UTC - event for startup-30687f8a-c9b7-4f8f-ae08-6621a3be51ae: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Oct 23 03:59:26.155: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 23 03:59:26.155: INFO: startup-30687f8a-c9b7-4f8f-ae08-6621a3be51ae node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:59:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:59:18 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:59:18 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 03:59:18 +0000 UTC }]
Oct 23 03:59:26.155: INFO:
Oct 23 03:59:26.159: INFO: Logging node info for node master1
Oct 23 03:59:26.162: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 147639 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}}
{kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:22 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:22 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:22 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:59:22 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:59:26.163: INFO: Logging kubelet events for node master1
Oct 23 03:59:26.165: INFO: Logging pods the kubelet thinks is on node master1
Oct 23 03:59:26.191: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 03:59:26.191: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Init container install-cni ready: true, restart count 1
Oct 23 03:59:26.191: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 03:59:26.191: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container kube-multus ready: true, restart count 1
Oct 23 03:59:26.191: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container coredns ready: true, restart count 2
Oct 23 03:59:26.191: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container docker-registry ready: true, restart count 0
Oct 23 03:59:26.191: INFO: Container nginx ready: true, restart count 0
Oct 23 03:59:26.191: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:59:26.191: INFO: Container node-exporter ready: true, restart count 0
Oct 23 03:59:26.191: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 03:59:26.191: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 03:59:26.191: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.191: INFO: Container kube-scheduler ready: true, restart count 0
W1023 03:59:26.206540 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 03:59:26.286: INFO: Latency metrics for node master1 Oct 23 03:59:26.286: INFO: Logging node info for node master2 Oct 23 03:59:26.289: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 147678 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:25 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:25 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:25 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:59:25 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:59:26.289: INFO: Logging kubelet events for node master2
Oct 23 03:59:26.292: INFO: Logging pods the kubelet thinks is on node master2
Oct 23 03:59:26.305: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.305: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 03:59:26.305: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.305: INFO: Container autoscaler ready: true, restart count 1
Oct 23 03:59:26.305: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:59:26.305: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:59:26.305: INFO: Container node-exporter ready: true, restart count 0
Oct 23 03:59:26.305: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.305: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 03:59:26.305: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.305: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 03:59:26.306: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.306: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 03:59:26.306: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:59:26.306: INFO: Init container install-cni ready: true, restart count 2
Oct 23 03:59:26.306: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 03:59:26.306: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.306: INFO: Container kube-multus ready: true, restart count 1
W1023 03:59:26.319102 32
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 03:59:26.584: INFO: Latency metrics for node master2 Oct 23 03:59:26.584: INFO: Logging node info for node master3 Oct 23 03:59:26.587: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 147626 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 
405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:20 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:20 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:20 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:59:20 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 03:59:26.587: INFO: Logging kubelet events for node master3
Oct 23 03:59:26.589: INFO: Logging pods the kubelet thinks is on node master3
Oct 23 03:59:26.603: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.603: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 03:59:26.603: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.603: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 03:59:26.603: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.603: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 03:59:26.603: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 03:59:26.603: INFO: Init container install-cni ready: true, restart count 1
Oct 23 03:59:26.603: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 03:59:26.603: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 03:59:26.603: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 03:59:26.603: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 03:59:26.603: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 03:59:26.603: INFO: Container node-exporter ready: true, restart count 0
Oct 23 03:59:26.603: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000
UTC (0+1 container statuses recorded) Oct 23 03:59:26.603: INFO: Container kube-scheduler ready: true, restart count 2 Oct 23 03:59:26.603: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.603: INFO: Container kube-multus ready: true, restart count 1 Oct 23 03:59:26.603: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.603: INFO: Container coredns ready: true, restart count 2 W1023 03:59:26.617995 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 03:59:26.687: INFO: Latency metrics for node master3 Oct 23 03:59:26.687: INFO: Logging node info for node node1 Oct 23 03:59:26.690: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 147523 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 02:09:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:17 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:17 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:17 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:59:17 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af 
directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 
busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 03:59:26.691: INFO: Logging kubelet events for node node1 Oct 23 03:59:26.694: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 03:59:26.715: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 03:59:26.715: INFO: Init container install-cni ready: true, restart count 2 Oct 23 03:59:26.715: INFO: Container kube-flannel ready: true, restart count 3 Oct 23 03:59:26.715: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Oct 23 03:59:26.716: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 03:59:26.716: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container node-exporter ready: true, restart count 0 Oct 23 03:59:26.716: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 03:59:26.716: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container kubernetes-dashboard ready: true, restart count 1 Oct 23 03:59:26.716: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded) Oct 23 03:59:26.716: INFO: Container config-reloader ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container grafana ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container prometheus ready: true, restart count 1 Oct 23 03:59:26.716: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 03:59:26.716: INFO: Container collectd ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 03:59:26.716: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded) Oct 23 03:59:26.716: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container prometheus-operator ready: true, restart count 0 Oct 23 03:59:26.716: INFO: privileged-pod started at 2021-10-23 03:59:17 +0000 UTC (0+2 container statuses recorded) Oct 23 03:59:26.716: INFO: Container not-privileged-container ready: false, restart count 0 Oct 23 03:59:26.716: INFO: Container privileged-container ready: false, restart count 0 Oct 23 03:59:26.716: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 03:59:26.716: INFO: kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 
03:59:26.716: INFO: Container kube-multus ready: true, restart count 1 Oct 23 03:59:26.716: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 03:59:26.716: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 03:59:26.716: INFO: Container discover ready: false, restart count 0 Oct 23 03:59:26.716: INFO: Container init ready: false, restart count 0 Oct 23 03:59:26.716: INFO: Container install ready: false, restart count 0 Oct 23 03:59:26.716: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 03:59:26.716: INFO: Container nodereport ready: true, restart count 0 Oct 23 03:59:26.716: INFO: Container reconcile ready: true, restart count 0 Oct 23 03:59:26.716: INFO: nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 03:59:26.716: INFO: explicit-root-uid started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:26.716: INFO: Container explicit-root-uid ready: false, restart count 0 W1023 03:59:26.729393 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 03:59:26.976: INFO: Latency metrics for node node1 Oct 23 03:59:26.976: INFO: Logging node info for node node2 Oct 23 03:59:26.980: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 147620 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true 
feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 02:09:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:18 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:18 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 03:59:18 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 03:59:18 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 03:59:26.980: INFO: Logging kubelet events for node node2 Oct 23 03:59:26.983: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 03:59:27.019: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Init container install-cni ready: true, restart count 1 Oct 23 03:59:27.019: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 
03:59:27.019: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container tas-extender ready: true, restart count 0 Oct 23 03:59:27.019: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 03:59:27.019: INFO: Container collectd ready: true, restart count 0 Oct 23 03:59:27.019: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 03:59:27.019: INFO: Container rbac-proxy ready: true, restart count 0 Oct 23 03:59:27.019: INFO: back-off-cap started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container back-off-cap ready: false, restart count 0 Oct 23 03:59:27.019: INFO: startup-override-145a6a2c-1511-49b3-b733-1b32cce7455f started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container agnhost-container ready: false, restart count 0 Oct 23 03:59:27.019: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 03:59:27.019: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 03:59:27.019: INFO: Container nodereport ready: true, restart count 1 Oct 23 03:59:27.019: INFO: Container reconcile ready: true, restart count 0 Oct 23 03:59:27.019: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 03:59:27.019: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 03:59:27.019: INFO: Container discover ready: false, restart count 0 Oct 23 03:59:27.019: INFO: Container init ready: false, restart count 0 Oct 23 03:59:27.019: INFO: Container install ready: false, restart count 0 Oct 23 03:59:27.019: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 03:59:27.019: INFO: startup-0e31c0ab-b23a-468f-aaec-4f5838fa390a started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container busybox ready: false, restart count 0 Oct 23 03:59:27.019: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container kube-multus ready: true, restart count 1 Oct 23 03:59:27.019: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 03:59:27.019: INFO: security-context-481e2837-1d7e-480d-b8df-b086c0e43302 started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container test-container ready: false, restart count 0 Oct 23 03:59:27.019: INFO: startup-30687f8a-c9b7-4f8f-ae08-6621a3be51ae started at 2021-10-23 03:59:18 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container busybox ready: false, restart count 0 Oct 23 03:59:27.019: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container kube-sriovdp ready: true, 
restart count 0 Oct 23 03:59:27.019: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 03:59:27.019: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 03:59:27.019: INFO: Container node-exporter ready: true, restart count 0 Oct 23 03:59:27.019: INFO: security-context-78367bca-154e-473c-9ea2-723f831d282d started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container test-container ready: false, restart count 0 Oct 23 03:59:27.019: INFO: security-context-f4557be0-8f2c-4023-94ee-40d89c05dd5e started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container test-container ready: false, restart count 0 Oct 23 03:59:27.019: INFO: busybox-ed6f7fd2-fe3a-48ca-b024-2891afe3dcd7 started at 2021-10-23 03:59:17 +0000 UTC (0+1 container statuses recorded) Oct 23 03:59:27.019: INFO: Container busybox ready: false, restart count 0 W1023 03:59:27.032262 32 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 03:59:27.650: INFO: Latency metrics for node node2 Oct 23 03:59:27.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7233" for this suite.
•! Panic [9.552 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be ready immediately after startupProbe succeeds [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400

  Test Panicked
  runtime error: invalid memory address or nil pointer dereference
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

  Full Stack Trace
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109
  panic(0x653b640, 0x9beb6a0)
  	/usr/local/go/src/runtime/panic.go:965 +0x1b9
  k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc0017e4f0c, 0x2, 0x2)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc00396c5c0, 0xc0017e4f00, 0x0, 0x0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0008e5200, 0xc00396c5c0, 0xc0039713e0, 0x0, 0x0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0008e5200, 0xc00396c5c0, 0x0, 0x0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0008e5200, 0xc00396c5c0, 0xc0008e5200, 0xc00396c5c0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
  k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc00396c5c0, 0x14, 0xc00504acc0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
  k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc0011011e0, 0xc000022840, 0x14, 0xc00504acc0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
  k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
  	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
  k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001802d80)
  	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
  k8s.io/kubernetes/test/e2e.TestE2E(0xc001802d80)
  	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
  testing.tRunner(0xc001802d80, 0x70e7b58)
  	/usr/local/go/src/testing/testing.go:1193 +0xef
  created by testing.(*T).Run
  	/usr/local/go/src/testing/testing.go:1238 +0x2b3
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:17.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod W1023 03:59:17.242584 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 03:59:17.242: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 03:59:17.244: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Oct 23 03:59:17.260: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 03:59:19.264: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 03:59:21.264: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 03:59:23.263: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 03:59:25.265: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 03:59:27.264: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Oct 23 03:59:29.265: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Oct 23 03:59:29.268: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1217 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 03:59:29.268: INFO: >>> kubeConfig: /root/.kube/config Oct 23 03:59:29.734: INFO: ExecWithOptions
{Command:[ip link del dummy1] Namespace:e2e-privileged-pod-1217 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 03:59:29.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Oct 23 03:59:29.832: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-1217 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 03:59:29.832: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:29.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-1217" for this suite. • [SLOW TEST:12.697 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":44,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:27.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:30.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9149" for this suite. 
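------------------------------
Note on the panic recorded above in [sig-node] Probing container "should be ready immediately after startupProbe succeeds": the trace bottoms out in k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1 (resource.go:334), the poll condition behind WaitForPodContainerStarted. A nil-pointer dereference at that point is consistent with reading the optional ContainerStatus.Started field, a *bool the kubelet may not have reported yet, without a nil check. The sketch below is a hedged illustration of a nil-safe variant of such a condition, not the framework's actual code; the package and function names (e2enotes, podContainerStartedSafe) are invented for this example.

package e2enotes

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podContainerStartedSafe returns a wait.ConditionFunc that reports whether
// the container at index idx has started, treating a missing status entry or
// a nil Started pointer as "not started yet" instead of panicking on it.
func podContainerStartedSafe(c kubernetes.Interface, ns, podName string, idx int) wait.ConditionFunc {
	return func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		statuses := pod.Status.ContainerStatuses
		if idx >= len(statuses) {
			return false, nil // kubelet has not published this container's status yet
		}
		started := statuses[idx].Started // *bool; nil until the kubelet reports a startup result
		return started != nil && *started, nil
	}
}
------------------------------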
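------------------------------
Note on the [sig-node] PrivilegedPod [NodeConformance] test above: it runs `ip link add dummy1 type dummy` in both containers of one pod, and creating a network link requires CAP_NET_ADMIN, so the command can succeed only in the container whose SecurityContext sets Privileged to true. Below is a minimal sketch of such a two-container pod in client-go types, mirroring the privileged-container / not-privileged-container pair named in the log; the image and sleep command are placeholders, not the test's actual manifest.

package e2enotes

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privilegedPod builds a pod with one privileged and one unprivileged
// container; only the privileged one may manipulate network links.
func privilegedPod(image string) *v1.Pod {
	priv, unpriv := true, false
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:            "privileged-container",
					Image:           image, // placeholder; any image that ships `ip`
					Command:         []string{"sleep", "3600"},
					SecurityContext: &v1.SecurityContext{Privileged: &priv},
				},
				{
					Name:            "not-privileged-container",
					Image:           image,
					Command:         []string{"sleep", "3600"},
					SecurityContext: &v1.SecurityContext{Privileged: &unpriv},
				},
			},
		},
	}
}
------------------------------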
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":2,"skipped":543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:31.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:31.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-2470" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:17.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context W1023 03:59:17.441681 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 03:59:17.441: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 03:59:17.443: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 03:59:17.459: INFO: Waiting up to 5m0s for pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302" in namespace "security-context-4137" to be "Succeeded or Failed" Oct 23 03:59:17.460: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. Elapsed: 1.862262ms Oct 23 03:59:19.464: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005460432s Oct 23 03:59:21.468: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009299011s Oct 23 03:59:23.473: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.014790256s Oct 23 03:59:25.478: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019793445s Oct 23 03:59:27.482: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022965982s Oct 23 03:59:29.486: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027676853s Oct 23 03:59:31.489: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.030837998s STEP: Saw pod success Oct 23 03:59:31.489: INFO: Pod "security-context-481e2837-1d7e-480d-b8df-b086c0e43302" satisfied condition "Succeeded or Failed" Oct 23 03:59:31.492: INFO: Trying to get logs from node node2 pod security-context-481e2837-1d7e-480d-b8df-b086c0e43302 container test-container: STEP: delete the pod Oct 23 03:59:31.572: INFO: Waiting for pod security-context-481e2837-1d7e-480d-b8df-b086c0e43302 to disappear Oct 23 03:59:31.574: INFO: Pod security-context-481e2837-1d7e-480d-b8df-b086c0e43302 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:31.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4137" for this suite. • [SLOW TEST:14.164 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":118,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:17.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1023 03:59:17.789753 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 03:59:17.789: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 03:59:17.791: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-145a6a2c-1511-49b3-b733-1b32cce7455f in namespace container-probe-3110 Oct 23 03:59:27.812: INFO: Started pod 
startup-override-145a6a2c-1511-49b3-b733-1b32cce7455f in namespace container-probe-3110 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 03:59:27.814: INFO: Initial restart count of pod startup-override-145a6a2c-1511-49b3-b733-1b32cce7455f is 0 Oct 23 03:59:31.825: INFO: Restart count of pod container-probe-3110/startup-override-145a6a2c-1511-49b3-b733-1b32cce7455f is now 1 (4.010778624s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:31.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3110" for this suite. • [SLOW TEST:14.075 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:27.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 03:59:27.548: INFO: Waiting up to 5m0s for pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f" in namespace "security-context-3954" to be "Succeeded or Failed" Oct 23 03:59:27.552: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.655365ms Oct 23 03:59:29.556: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00796368s Oct 23 03:59:31.560: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011251923s Oct 23 03:59:33.564: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015321159s Oct 23 03:59:35.567: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01903088s Oct 23 03:59:37.570: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.021729305s STEP: Saw pod success Oct 23 03:59:37.570: INFO: Pod "security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f" satisfied condition "Succeeded or Failed" Oct 23 03:59:37.572: INFO: Trying to get logs from node node2 pod security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f container test-container: STEP: delete the pod Oct 23 03:59:37.596: INFO: Waiting for pod security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f to disappear Oct 23 03:59:37.598: INFO: Pod security-context-c23925f8-ff26-4f0d-adf6-6d86db3e055f no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:37.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3954" for this suite. • [SLOW TEST:10.091 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":2,"skipped":167,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:27.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Oct 23 03:59:27.564: INFO: Waiting up to 5m0s for pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82" in namespace "security-context-8014" to be "Succeeded or Failed" Oct 23 03:59:27.567: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.408231ms Oct 23 03:59:29.569: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005081016s Oct 23 03:59:31.572: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.007430388s Oct 23 03:59:33.576: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011205742s Oct 23 03:59:35.579: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015095841s Oct 23 03:59:37.582: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.017787711s STEP: Saw pod success Oct 23 03:59:37.582: INFO: Pod "security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82" satisfied condition "Succeeded or Failed" Oct 23 03:59:37.584: INFO: Trying to get logs from node node2 pod security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82 container test-container: STEP: delete the pod Oct 23 03:59:37.618: INFO: Waiting for pod security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82 to disappear Oct 23 03:59:37.619: INFO: Pod security-context-8c4e71b1-104d-4561-8637-b73f8ba35c82 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:37.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8014" for this suite. • [SLOW TEST:10.094 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":2,"skipped":141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:30.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Oct 23 03:59:30.416: INFO: Waiting up to 5m0s for pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4" in namespace "downward-api-5523" to be "Succeeded or Failed" Oct 23 03:59:30.418: INFO: Pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067095ms Oct 23 03:59:32.421: INFO: Pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005543455s Oct 23 03:59:34.426: INFO: Pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010609699s Oct 23 03:59:36.429: INFO: Pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01349286s Oct 23 03:59:38.433: INFO: Pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.017500492s STEP: Saw pod success Oct 23 03:59:38.433: INFO: Pod "downward-api-7726c77b-9004-4716-98bd-074014a91df4" satisfied condition "Succeeded or Failed" Oct 23 03:59:38.436: INFO: Trying to get logs from node node2 pod downward-api-7726c77b-9004-4716-98bd-074014a91df4 container dapi-container: STEP: delete the pod Oct 23 03:59:38.450: INFO: Waiting for pod downward-api-7726c77b-9004-4716-98bd-074014a91df4 to disappear Oct 23 03:59:38.452: INFO: Pod downward-api-7726c77b-9004-4716-98bd-074014a91df4 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:38.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5523" for this suite. • [SLOW TEST:8.077 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":2,"skipped":285,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:31.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Oct 23 03:59:31.609: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782" in namespace "security-context-test-2431" to be "Succeeded or Failed" Oct 23 03:59:31.612: INFO: Pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894819ms Oct 23 03:59:33.615: INFO: Pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006527916s Oct 23 03:59:35.618: INFO: Pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009553909s Oct 23 03:59:37.621: INFO: Pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011878165s Oct 23 03:59:39.625: INFO: Pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.016430667s Oct 23 03:59:39.625: INFO: Pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782" satisfied condition "Succeeded or Failed" Oct 23 03:59:39.680: INFO: Got logs for pod "busybox-privileged-true-0ef80ef1-e344-47e2-a052-81ada229c782": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:39.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2431" for this suite. • [SLOW TEST:8.110 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":4,"skipped":857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:31.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Oct 23 03:59:31.917: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5811" to be "Succeeded or Failed" Oct 23 03:59:31.920: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7545ms Oct 23 03:59:33.924: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007139507s Oct 23 03:59:35.927: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010288306s Oct 23 03:59:37.930: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013091596s Oct 23 03:59:39.935: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017764591s Oct 23 03:59:39.935: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:39.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5811" for this suite. 
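------------------------------
The "implicit-nonroot-uid" spec above sets runAsNonRoot without naming a UID, so the kubelet must verify the UID baked into the image (its USER directive) before starting the container. A hedged sketch of that shape; the image tag is an assumption standing in for whatever non-root e2e image the framework resolves:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    // implicitNonRootPod relies on the image's own USER for the UID. With
    // RunAsNonRoot=true and no RunAsUser, the kubelet inspects the image
    // config and refuses to start the container if that user is root.
    func implicitNonRootPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "implicit-nonroot-uid"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "implicit-nonroot-uid",
                    Image: "k8s.gcr.io/e2e-test-images/nonroot:1.1", // assumption: an image whose USER is non-zero
                    SecurityContext: &corev1.SecurityContext{
                        RunAsNonRoot: boolPtr(true), // note: no RunAsUser set here
                    },
                }},
            },
        }
    }
------------------------------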
• [SLOW TEST:8.063 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:31.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Oct 23 03:59:41.075: INFO: start=2021-10-23 03:59:36.03815492 +0000 UTC m=+20.604108811, now=2021-10-23 03:59:41.075313289 +0000 UTC m=+25.641267297, kubelet pod: {"metadata":{"name":"pod-submit-remove-73d6415f-3353-4927-8da1-3c5eae3b3450","namespace":"pods-5342","uid":"0aaffdb6-61a8-4019-a427-f00d7d154d54","resourceVersion":"147864","creationTimestamp":"2021-10-23T03:59:32Z","deletionTimestamp":"2021-10-23T04:00:06Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"4894561"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.244\"\n ],\n \"mac\": \"5a:7a:7e:bd:0d:f9\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.244\"\n ],\n \"mac\": \"5a:7a:7e:bd:0d:f9\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-10-23T03:59:32.027323457Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-23T03:59:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-pmfwm","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-pmfwm","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T03:59:32Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T03:59:35Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T03:59:35Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T03:59:32Z"}],"hostIP":"10.10.190.207","podIP":"10.244.3.244","podIPs":[{"ip":"10.244.3.244"}],"startTime":"2021-10-23T03:59:32Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-10-23T03:59:34Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://cfd6195d289a34e714e4bf080167376c397e330120b5f9c2ed946a180cbdb2a0","started":true}],"qosClass":"BestEffort"}} Oct 23 03:59:46.062: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:46.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5342" for this suite. 
• [SLOW TEST:14.096 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":2,"skipped":324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:38.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:50.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8726" for this suite. • [SLOW TEST:12.100 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":3,"skipped":301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:40.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Oct 23 03:59:40.287: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2456" to be "Succeeded or Failed" 
Oct 23 03:59:40.292: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287095ms Oct 23 03:59:42.294: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006982918s Oct 23 03:59:44.299: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011833543s Oct 23 03:59:46.305: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017189765s Oct 23 03:59:48.310: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022128039s Oct 23 03:59:50.314: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02630246s Oct 23 03:59:52.317: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029506277s Oct 23 03:59:54.321: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.034011467s Oct 23 03:59:54.321: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:54.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2456" for this suite. • [SLOW TEST:14.215 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":487,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:46.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Oct 23 03:59:46.296: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e" in namespace "security-context-test-3602" to be "Succeeded or Failed" Oct 23 03:59:46.301: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.440385ms Oct 23 03:59:48.306: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.009740987s Oct 23 03:59:50.313: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016587873s Oct 23 03:59:52.317: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020672043s Oct 23 03:59:54.324: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027329292s Oct 23 03:59:56.327: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e": Phase="Failed", Reason="", readiness=false. Elapsed: 10.030298962s Oct 23 03:59:56.327: INFO: Pod "busybox-readonly-true-228f4d16-068d-436e-8608-87d7db8e737e" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:56.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3602" for this suite. • [SLOW TEST:10.077 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":414,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:54.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 03:59:56.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2838" for this suite. 
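------------------------------
Note that the busybox-readonly-true pod above ends Phase="Failed" and the spec still passes: the wait condition is literally "Succeeded or Failed", and the assertion is that a write to a read-only root filesystem must not succeed. A sketch of the shape being exercised (names and command illustrative):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    // readonlyRootfsPod mounts the container's root filesystem read-only.
    // The write below fails with EROFS, so the pod ends up Failed, which is
    // exactly the outcome the spec expects to observe.
    func readonlyRootfsPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "busybox-readonly-true",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
                    Command: []string{"sh", "-c", "touch /file"},       // must fail on a read-only rootfs
                    SecurityContext: &corev1.SecurityContext{
                        ReadOnlyRootFilesystem: boolPtr(true),
                    },
                }},
            },
        }
    }
------------------------------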
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":4,"skipped":521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:56.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Oct 23 03:59:56.526: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829" in namespace "security-context-test-7872" to be "Succeeded or Failed" Oct 23 03:59:56.528: INFO: Pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829": Phase="Pending", Reason="", readiness=false. Elapsed: 1.83519ms Oct 23 03:59:58.533: INFO: Pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006746131s Oct 23 04:00:00.536: INFO: Pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00957107s Oct 23 04:00:02.542: INFO: Pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01535146s Oct 23 04:00:04.548: INFO: Pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021926492s Oct 23 04:00:04.548: INFO: Pod "alpine-nnp-true-28da5ba9-8744-4a65-a3b7-1709034f4829" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:04.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7872" for this suite. 
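------------------------------
The alpine-nnp-true pod above exercises allowPrivilegeEscalation=true: the container process starts without the no_new_privs bit, so a setuid binary inside it can still raise its effective UID even though the container runs as a non-root user. A sketch under those assumptions (image tag is an assumption; the real fixture ships a setuid test binary):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func boolPtr(b bool) *bool { return &b }

    // nnpTruePod runs as a non-root UID but leaves privilege escalation
    // enabled, so no_new_privs is not set on the container process.
    func nnpTruePod() *corev1.Pod {
        uid := int64(1000)
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "alpine-nnp-true",
                    Image: "k8s.gcr.io/e2e-test-images/nonewprivs:1.3", // assumption: image with a setuid test binary
                    SecurityContext: &corev1.SecurityContext{
                        RunAsUser:                &uid,
                        AllowPrivilegeEscalation: boolPtr(true),
                    },
                }},
            },
        }
    }
------------------------------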
• [SLOW TEST:8.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:37.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true Oct 23 04:00:00.761: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false STEP: patching pod status with condition "k8s.io/test-condition1" to false Oct 23 04:00:02.772: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true Oct 23 04:00:03.771: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:04.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3441" for this suite. 
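------------------------------
The pod-ready spec above shows what readiness gates do: even with its container running, the pod is not Ready until every condition listed under readinessGates is patched to True on the pod's status subresource, which is why the spec polls and briefly logs mismatches while it flips the conditions. A sketch of the pod shape, with the condition names taken from the log:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // readinessGatePod becomes Ready only once an external controller (here,
    // the e2e test itself) patches both custom conditions to True in the
    // pod's status.
    func readinessGatePod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
            Spec: corev1.PodSpec{
                ReadinessGates: []corev1.PodReadinessGate{
                    {ConditionType: corev1.PodConditionType("k8s.io/test-condition1")},
                    {ConditionType: corev1.PodConditionType("k8s.io/test-condition2")},
                },
                Containers: []corev1.Container{{
                    Name:  "pod-readiness-gate",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                    Args:  []string{"pause"},
                }},
            },
        }
    }
------------------------------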
• [SLOW TEST:27.079 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":3,"skipped":213,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:56.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 23 03:59:56.944: INFO: Waiting up to 5m0s for pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e" in namespace "security-context-4926" to be "Succeeded or Failed" Oct 23 03:59:56.948: INFO: Pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.306545ms Oct 23 03:59:58.953: INFO: Pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009064272s Oct 23 04:00:00.955: INFO: Pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011648374s Oct 23 04:00:02.960: INFO: Pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016092201s Oct 23 04:00:04.964: INFO: Pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020255042s STEP: Saw pod success Oct 23 04:00:04.964: INFO: Pod "security-context-68b7248f-e596-450d-b305-7599703eeb4e" satisfied condition "Succeeded or Failed" Oct 23 04:00:04.966: INFO: Trying to get logs from node node2 pod security-context-68b7248f-e596-450d-b305-7599703eeb4e container test-container: STEP: delete the pod Oct 23 04:00:04.978: INFO: Waiting for pod security-context-68b7248f-e596-450d-b305-7599703eeb4e to disappear Oct 23 04:00:04.980: INFO: Pod security-context-68b7248f-e596-450d-b305-7599703eeb4e no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:04.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4926" for this suite. 
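------------------------------
The STEP text above still mentions the alpha seccomp annotation (seccomp.security.alpha.kubernetes.io/pod), but since v1.19 the same intent is expressed through the securityContext.seccompProfile field. A sketch of a container-level Unconfined override in field form; the exact fixture shape and the probe command are my assumptions, not the test's source:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // seccompUnconfinedPod opts one container out of seccomp filtering.
    // Inside the container, "Seccomp: 0" in /proc/self/status indicates
    // the process is unconfined.
    func seccompUnconfinedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "security-context-unconfined"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
                    Command: []string{"grep", "Seccomp:", "/proc/self/status"},
                    SecurityContext: &corev1.SecurityContext{
                        SeccompProfile: &corev1.SeccompProfile{
                            Type: corev1.SeccompProfileTypeUnconfined,
                        },
                    },
                }},
            },
        }
    }
------------------------------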
• [SLOW TEST:8.082 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":5,"skipped":677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:05.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Oct 23 04:00:05.070: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:05.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-8356" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:04.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:06.858: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-9126" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":4,"skipped":234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:07.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 23 04:00:07.162: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Oct 23 04:00:07.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5215 create -f -' Oct 23 04:00:07.684: INFO: stderr: "" Oct 23 04:00:07.684: INFO: stdout: "secret/test-secret created\n" Oct 23 04:00:07.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5215 create -f -' Oct 23 04:00:08.035: INFO: stderr: "" Oct 23 04:00:08.035: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Oct 23 04:00:16.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5215 logs secret-test-pod test-container' Oct 23 04:00:16.206: INFO: stderr: "" Oct 23 04:00:16.206: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:16.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-5215" for this suite. 
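------------------------------
The secret example above is the classic secret-as-volume pattern: the Secret's data-1 key materializes as the file /etc/secret-volume/data-1, and the container simply reads it back, hence the "value-1" in the captured stdout. A sketch of the pod half of that fixture (image and command illustrative):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // secretReaderPod mounts the Secret "test-secret" read-only and prints
    // one of its keys from the projected file.
    func secretReaderPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "secret-volume",
                    VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"},
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29", // illustrative image
                    Command: []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "secret-volume",
                        ReadOnly:  true,
                        MountPath: "/etc/secret-volume",
                    }},
                }},
            },
        }
    }
------------------------------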
• [SLOW TEST:9.087 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":5,"skipped":367,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:05.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:16.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4785" for this suite. • [SLOW TEST:11.096 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":6,"skipped":753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:39.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set 
[Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-a4bfe69b-7d67-4c0f-8391-f81171e7b353 in namespace container-probe-7202 Oct 23 03:59:55.843: INFO: Started pod liveness-override-a4bfe69b-7d67-4c0f-8391-f81171e7b353 in namespace container-probe-7202 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 03:59:55.846: INFO: Initial restart count of pod liveness-override-a4bfe69b-7d67-4c0f-8391-f81171e7b353 is 1 Oct 23 04:00:17.893: INFO: Restart count of pod container-probe-7202/liveness-override-a4bfe69b-7d67-4c0f-8391-f81171e7b353 is now 2 (22.047101018s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:17.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7202" for this suite. • [SLOW TEST:38.104 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":5,"skipped":912,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:17.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Oct 23 04:00:17.964: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:17.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-7660" for this suite. 
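------------------------------
The liveness-override spec above exercises the probe-level terminationGracePeriodSeconds introduced in v1.21 behind the ProbeTerminationGracePeriod gate: when a liveness probe kills a container, the probe's own grace period overrides the pod's much larger one, which is how the restart can be observed within seconds. A sketch of that shape, assuming the v1.21 API where the probe's handler is still the embedded Handler type (later releases renamed it ProbeHandler); names, images, and durations are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // livenessOverridePod pairs a long pod-level grace period with a short
    // probe-level one; the probe-level value wins for probe-triggered kills.
    func livenessOverridePod() *corev1.Pod {
        podGrace := int64(3600) // deliberately long
        probeGrace := int64(10) // what the kubelet honors on probe failure
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-override"},
            Spec: corev1.PodSpec{
                TerminationGracePeriodSeconds: &podGrace,
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
                    Command: []string{"sh", "-c", "sleep 1000"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}, // always failing
                        },
                        PeriodSeconds:                 5,
                        FailureThreshold:              1,
                        TerminationGracePeriodSeconds: &probeGrace,
                    },
                }},
            },
        }
    }
------------------------------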
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:37.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e in namespace kubelet-7077 I1023 03:59:38.060189 37 runners.go:190] Created replication controller with name: cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e, namespace: kubelet-7077, replica count: 20 I1023 03:59:48.111864 37 runners.go:190] cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e Pods: 20 out of 20 created, 4 running, 16 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 03:59:58.112768 37 runners.go:190] cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 03:59:59.113: INFO: Checking pods on node node2 via /runningpods endpoint Oct 23 03:59:59.113: INFO: Checking pods on node node1 via /runningpods endpoint Oct 23 03:59:59.145: INFO: Resource usage on node "master2":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.477       4081.98                 1700.86
"runtime"  0.101       607.73                  259.74
"kubelet"  0.101       607.73                  259.74
Resource usage on node "master3":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"  0.102       555.88                  248.10
"/"        0.411       3772.17                 1580.89
"runtime"  0.102       555.88                  248.10
Resource usage on node "node1":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        1.950       6724.71                 2494.14
"runtime"  0.714       2794.42                 683.77
"kubelet"  0.714       2794.42                 683.77
Resource usage on node "node2":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"  1.067       1698.77                 631.73
"/"        1.803       4323.04                 1270.05
"runtime"  1.067       1698.77                 631.73
Resource usage on node "master1":
container  cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"        0.397       4936.67                 1619.08
"runtime"  0.123       656.15                  277.12
"kubelet"  0.123       656.15                  277.12
STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e in namespace kubelet-7077, will wait for the garbage collector to delete the pods Oct 23 03:59:59.202: INFO: Deleting ReplicationController cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e took: 4.742056ms Oct 23 03:59:59.803: INFO: Terminating ReplicationController cleanup20-f02046d7-e81c-426f-a5f3-8501d5cc0e7e pods took: 600.76156ms Oct 23 04:00:18.604: INFO: Checking pods on node node2 via /runningpods endpoint Oct 23 04:00:18.604: INFO: Checking pods on node node1 via /runningpods endpoint Oct 23 04:00:19.872: INFO: Deleting 20 pods on 2 nodes completed in 2.268798067s after the RC was deleted Oct 23 04:00:19.873: INFO: CPU usage of containers on node "master3":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.411  0.411  0.529  0.529  0.529
"runtime"  0.000  0.000  0.102  0.102  0.102  0.102  0.102
"kubelet"  0.000  0.000  0.102  0.102  0.102  0.102  0.102
CPU usage of containers on node "node1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.465  1.826  1.826  1.826  1.826
"runtime"  0.000  0.000  0.714  0.714  0.714  0.714  0.714
"kubelet"  0.000  0.000  0.714  0.714  0.714  0.714  0.714
CPU usage of containers on node "node2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.348  1.803  1.803  1.803  1.803
"runtime"  0.000  0.000  1.001  1.001  1.001  1.001  1.001
"kubelet"  0.000  0.000  1.001  1.001  1.001  1.001  1.001
CPU usage of containers on node "master1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.397  0.397  0.407  0.407  0.407
"runtime"  0.000  0.000  0.111  0.123  0.123  0.123  0.123
"kubelet"  0.000  0.000  0.111  0.123  0.123  0.123  0.123
CPU usage of containers on node "master2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.489  0.489  0.808  0.808  0.808
"runtime"  0.000  0.000  0.101  0.105  0.105  0.105  0.105
"kubelet"  0.000  0.000  0.101  0.105  0.105  0.105  0.105
[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:19.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-7077" for this suite. • [SLOW TEST:41.915 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":3,"skipped":318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:17.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1023 03:59:17.847563 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 03:59:17.847: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 03:59:17.849: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-ed6f7fd2-fe3a-48ca-b024-2891afe3dcd7 in namespace container-probe-139 Oct 23 03:59:27.867: INFO: Started pod busybox-ed6f7fd2-fe3a-48ca-b024-2891afe3dcd7 in namespace container-probe-139 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 03:59:27.869: INFO: Initial restart count of pod busybox-ed6f7fd2-fe3a-48ca-b024-2891afe3dcd7 is 0 Oct 23 04:00:21.988: INFO: Restart count of pod container-probe-139/busybox-ed6f7fd2-fe3a-48ca-b024-2891afe3dcd7 is now 1 (54.1183221s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:21.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-139" for this suite. 
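------------------------------
The container-probe-139 spec above depends on the kubelet enforcing timeoutSeconds on exec probes (honored by default since the ExecProbeTimeout gate landed in v1.20). A pod of roughly this shape reproduces the behavior (a sketch; image and exact commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-exec-timeout     # assumed
spec:
  containers:
  - name: busybox
    image: busybox:1.29          # assumed
    command: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        # the probe command sleeps past timeoutSeconds, so every probe
        # run counts as a failure and the container gets restarted
        command: ["/bin/sh", "-c", "sleep 10"]
      timeoutSeconds: 1
      failureThreshold: 1
------------------------------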
• [SLOW TEST:64.182 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":1,"skipped":330,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:16.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:23.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1015" for this suite. • [SLOW TEST:7.097 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":7,"skipped":811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:19.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 23 04:00:20.001: INFO: Waiting up to 5m0s for pod "security-context-161f253f-401d-404f-b692-781d60225df0" in namespace "security-context-4927" to be "Succeeded or Failed" Oct 23 04:00:20.004: INFO: Pod "security-context-161f253f-401d-404f-b692-781d60225df0": Phase="Pending", 
Reason="", readiness=false. Elapsed: 2.386474ms Oct 23 04:00:22.007: INFO: Pod "security-context-161f253f-401d-404f-b692-781d60225df0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005452194s Oct 23 04:00:24.011: INFO: Pod "security-context-161f253f-401d-404f-b692-781d60225df0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009928402s Oct 23 04:00:26.015: INFO: Pod "security-context-161f253f-401d-404f-b692-781d60225df0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013656463s STEP: Saw pod success Oct 23 04:00:26.015: INFO: Pod "security-context-161f253f-401d-404f-b692-781d60225df0" satisfied condition "Succeeded or Failed" Oct 23 04:00:26.017: INFO: Trying to get logs from node node2 pod security-context-161f253f-401d-404f-b692-781d60225df0 container test-container: STEP: delete the pod Oct 23 04:00:26.176: INFO: Waiting for pod security-context-161f253f-401d-404f-b692-781d60225df0 to disappear Oct 23 04:00:26.178: INFO: Pod security-context-161f253f-401d-404f-b692-781d60225df0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:26.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4927" for this suite. • [SLOW TEST:6.224 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":4,"skipped":343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:26.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Oct 23 04:00:26.278: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:26.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3479" for this suite. 
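------------------------------
The security-context-4927 spec above still sets the legacy seccomp.security.alpha.kubernetes.io/pod annotation, as its STEP line shows. On v1.19+ the securityContext field below is the supported equivalent of runtime/default (a sketch; pod name, image, and the verification command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: security-context-seccomp   # assumed
spec:
  restartPolicy: Never
  securityContext:
    seccompProfile:
      type: RuntimeDefault         # field form of runtime/default
  containers:
  - name: test-container
    image: busybox:1.29            # assumed
    # "Seccomp: 2" in /proc/1/status indicates a seccomp filter is applied
    command: ["/bin/sh", "-c", "grep Seccomp /proc/1/status"]
------------------------------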
S [SKIPPING] in Spec Setup (BeforeEach) [0.035 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:04.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-3577527a-bf36-416f-8686-d7cfabc3f388 in namespace container-probe-8391 Oct 23 04:00:18.726: INFO: Started pod liveness-3577527a-bf36-416f-8686-d7cfabc3f388 in namespace container-probe-8391 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:00:18.730: INFO: Initial restart count of pod liveness-3577527a-bf36-416f-8686-d7cfabc3f388 is 0 Oct 23 04:00:28.755: INFO: Restart count of pod container-probe-8391/liveness-3577527a-bf36-416f-8686-d7cfabc3f388 is now 1 (10.025395773s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:28.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8391" for this suite. 
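------------------------------
The container-probe-8391 spec above points an HTTP liveness probe at an endpoint that answers with a redirect to another local path; the kubelet follows same-host redirects, so the probe result tracks the final target. A rough sketch of such a pod (the agnhost image, args, port, and /redirect endpoint are assumptions inferred from the pod name in the log):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-redirect        # assumed
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # assumed
    args: ["liveness"]           # assumed: serves a /healthz that starts failing
    livenessProbe:
      httpGet:
        path: /redirect?loc=%2Fhealthz   # 302s to the local /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
------------------------------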
• [SLOW TEST:24.085 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":5,"skipped":555,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:23.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 23 04:00:23.638: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Oct 23 04:00:23.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5935 create -f -' Oct 23 04:00:24.058: INFO: stderr: "" Oct 23 04:00:24.058: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Oct 23 04:00:30.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5935 logs dapi-test-pod test-container' Oct 23 04:00:30.240: INFO: stderr: "" Oct 23 04:00:30.240: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5935\nMY_POD_IP=10.244.4.207\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Oct 23 04:00:30.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-5935 logs dapi-test-pod test-container' Oct 23 04:00:30.421: INFO: stderr: "" Oct 23 04:00:30.421: INFO: stdout: "KUBERNETES_PORT=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-5935\nMY_POD_IP=10.244.4.207\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:30.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-5935" for this suite. 
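------------------------------
The env dump captured twice from dapi-test-pod above (MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, MY_HOST_IP) is produced by downward-API env vars of this form (a sketch; only the variable names come from the log, the image and command are assumed):

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29          # assumed
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
------------------------------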
• [SLOW TEST:6.833 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":8,"skipped":870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:30.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Oct 23 04:00:30.517: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:30.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-6217" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:17.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1023 03:59:17.302390 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 03:59:17.302: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 03:59:17.304: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-0e31c0ab-b23a-468f-aaec-4f5838fa390a in namespace container-probe-4376 Oct 23 03:59:31.323: INFO: Started pod startup-0e31c0ab-b23a-468f-aaec-4f5838fa390a in namespace container-probe-4376 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 03:59:31.325: INFO: Initial restart count of pod startup-0e31c0ab-b23a-468f-aaec-4f5838fa390a is 0 Oct 23 04:00:31.462: INFO: Restart count of pod container-probe-4376/startup-0e31c0ab-b23a-468f-aaec-4f5838fa390a is now 1 (1m0.137003558s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:31.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4376" for this suite. • [SLOW TEST:74.196 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":1,"skipped":46,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:26.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 04:00:31.910: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:31.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5341" for this suite. 
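------------------------------
The 'Expected: &{DONE}' check in the container-runtime-5341 spec above works because the kubelet copies whatever a container writes to its terminationMessagePath into the terminated state's message in the pod status. A minimal sketch (pod name and image assumed):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # assumed
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29            # assumed
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # the default path
    terminationMessagePolicy: File                 # read only the file, never logs
------------------------------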
• [SLOW TEST:5.083 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":5,"skipped":660,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:22.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Oct 23 04:00:22.061: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149" in namespace "security-context-test-6236" to be "Succeeded or Failed" Oct 23 04:00:22.064: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.600151ms Oct 23 04:00:24.067: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005670347s Oct 23 04:00:26.070: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008632267s Oct 23 04:00:28.074: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012329399s Oct 23 04:00:30.077: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016055034s Oct 23 04:00:32.080: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.018475976s Oct 23 04:00:32.080: INFO: Pod "alpine-nnp-nil-f8cec8b9-da4d-4bee-ac77-f74df46a6149" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:32.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6236" for this suite. 
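------------------------------
The alpine-nnp-nil pod in security-context-test-6236 above checks the default behavior: with allowPrivilegeEscalation left unset and a non-root UID, the no_new_privs flag is not applied, so escalation via setuid binaries remains possible. Roughly (image and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-nil           # echoes the log's pod-name prefix
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine:3.14           # assumed
    command: ["/bin/sh", "-c", "id -u"]   # assumed check
    securityContext:
      runAsUser: 1000
      # allowPrivilegeEscalation deliberately omitted: it defaults to true,
      # which is exactly what the spec verifies
------------------------------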
• [SLOW TEST:10.162 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:32.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:32.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-1367" for this suite. 
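------------------------------
The lease checked by node-lease-test-1367 above lives in the kube-node-lease namespace, one Lease per node, renewed by the kubelet as its heartbeat; it can be listed with kubectl get lease -n kube-node-lease. The object has roughly this shape (values are illustrative):

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: node2                      # the lease name matches the node name
  namespace: kube-node-lease
spec:
  holderIdentity: node2
  leaseDurationSeconds: 40         # kubelet default
  renewTime: "2021-10-23T04:00:30.000000Z"   # bumped on each heartbeat
------------------------------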
• ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:31.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Oct 23 04:00:31.998: INFO: Waiting up to 5m0s for pod "busybox-user-0-28d0c41c-d422-4870-a4b2-57748e6f9370" in namespace "security-context-test-4734" to be "Succeeded or Failed" Oct 23 04:00:32.001: INFO: Pod "busybox-user-0-28d0c41c-d422-4870-a4b2-57748e6f9370": Phase="Pending", Reason="", readiness=false. Elapsed: 2.625122ms Oct 23 04:00:34.005: INFO: Pod "busybox-user-0-28d0c41c-d422-4870-a4b2-57748e6f9370": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007424319s Oct 23 04:00:36.010: INFO: Pod "busybox-user-0-28d0c41c-d422-4870-a4b2-57748e6f9370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011693905s Oct 23 04:00:36.010: INFO: Pod "busybox-user-0-28d0c41c-d422-4870-a4b2-57748e6f9370" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:36.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4734" for this suite. 
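------------------------------
The busybox-user-0 pod above pins the container UID through the security context, so the process runs as root regardless of the image's default user (a sketch; image and command assumed):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-0           # echoes the log's pod-name prefix
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29          # assumed
    command: ["/bin/sh", "-c", "id -u"]   # prints 0 under runAsUser: 0
    securityContext:
      runAsUser: 0
------------------------------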
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:16.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Oct 23 04:00:42.778: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:42.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8812" for this suite. • [SLOW TEST:26.086 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":6,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:27.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-8a045168-31ce-4ec5-b9ff-ed6d804a6d8e in namespace container-probe-8974 Oct 23 03:59:37.369: INFO: Started pod startup-8a045168-31ce-4ec5-b9ff-ed6d804a6d8e in namespace container-probe-8974 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 03:59:37.370: INFO: Initial restart count of pod startup-8a045168-31ce-4ec5-b9ff-ed6d804a6d8e is 0 Oct 23 04:00:43.501: INFO: Restart count of pod container-probe-8974/startup-8a045168-31ce-4ec5-b9ff-ed6d804a6d8e is now 1 (1m6.130132415s elapsed) STEP: deleting the pod 
[AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:43.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8974" for this suite. • [SLOW TEST:76.193 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":2,"skipped":59,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 03:59:50.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-70f12c2e-da38-4129-a555-5f0cbdeac2dc in namespace container-probe-9873 Oct 23 03:59:56.760: INFO: Started pod busybox-70f12c2e-da38-4129-a555-5f0cbdeac2dc in namespace container-probe-9873 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 03:59:56.762: INFO: Initial restart count of pod busybox-70f12c2e-da38-4129-a555-5f0cbdeac2dc is 0 Oct 23 04:00:46.872: INFO: Restart count of pod container-probe-9873/busybox-70f12c2e-da38-4129-a555-5f0cbdeac2dc is now 1 (50.10989819s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:46.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9873" for this suite. 
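------------------------------
The container-probe-8974 restart above, roughly 1m6s after the pod started, matches the startup-probe budget of failureThreshold x periodSeconds: once a container exhausts it without a single success, the kubelet restarts the container. A sketch (image, command, and exact thresholds are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-fails      # assumed
spec:
  containers:
  - name: busybox
    image: busybox:1.29          # assumed
    command: ["/bin/sh", "-c", "sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/startup"]   # file never exists, so it never succeeds
      periodSeconds: 10
      failureThreshold: 6        # ~60s total before the restart
------------------------------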
• [SLOW TEST:56.170 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":352,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:43.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Oct 23 04:00:43.723: INFO: Waiting up to 5m0s for pod "pod-always-succeede92b62ca-a120-4afc-ae33-526f504dcc8e" in namespace "pods-3371" to be "Succeeded or Failed" Oct 23 04:00:43.726: INFO: Pod "pod-always-succeede92b62ca-a120-4afc-ae33-526f504dcc8e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.697896ms Oct 23 04:00:45.729: INFO: Pod "pod-always-succeede92b62ca-a120-4afc-ae33-526f504dcc8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006695873s Oct 23 04:00:47.733: INFO: Pod "pod-always-succeede92b62ca-a120-4afc-ae33-526f504dcc8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010054987s Oct 23 04:00:49.738: INFO: Pod "pod-always-succeede92b62ca-a120-4afc-ae33-526f504dcc8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015507353s STEP: Saw pod success Oct 23 04:00:49.738: INFO: Pod "pod-always-succeede92b62ca-a120-4afc-ae33-526f504dcc8e" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:51.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3371" for this suite. 
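------------------------------
The pods-3371 spec above asserts a negative: after every container of a run-once pod exits 0, the kubelet must not create a fresh sandbox (which would surface as extra sandbox-related events, hence the "Checking events about the pod" step). The pod under test is roughly of this shape (restart policy, image, and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed       # echoes the log's pod-name prefix
spec:
  restartPolicy: OnFailure       # exit code 0 means nothing is restarted
  containers:
  - name: main
    image: busybox:1.29          # assumed
    command: ["true"]            # exits 0 immediately
------------------------------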
• [SLOW TEST:8.073 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":143,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:46.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label fizz-3dec7f8b-c7f5-4469-9563-78ab8315393f buzz STEP: verifying the node has the label foo-b2439ae9-a4cd-44a3-983a-960502219358 bar STEP: Trying to create runtimeclass and pod STEP: removing the label foo-b2439ae9-a4cd-44a3-983a-960502219358 off the node node1 STEP: verifying the node doesn't have the label foo-b2439ae9-a4cd-44a3-983a-960502219358 STEP: removing the label fizz-3dec7f8b-c7f5-4469-9563-78ab8315393f off the node node1 STEP: verifying the node doesn't have the label fizz-3dec7f8b-c7f5-4469-9563-78ab8315393f [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:57.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3104" for this suite. 
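------------------------------
The runtimeclass-3104 spec above labels node1 (the foo-.../bar and fizz-.../buzz pairs in the STEP lines), then creates a RuntimeClass whose scheduling.nodeSelector carries the same labels, so a pod referencing it lands on that node. In manifest form (the RuntimeClass name, handler, and pod details are assumptions; the label pairs come from the log):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: test-runtimeclass        # assumed
handler: runc                    # must name a handler configured in the CRI runtime
scheduling:
  nodeSelector:
    foo-b2439ae9-a4cd-44a3-983a-960502219358: bar
    fizz-3dec7f8b-c7f5-4469-9563-78ab8315393f: buzz
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-pod         # assumed
spec:
  runtimeClassName: test-runtimeclass
  containers:
  - name: main
    image: busybox:1.29          # assumed
    command: ["true"]
------------------------------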
• [SLOW TEST:10.129 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":5,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:57.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:00:57.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-4294" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":6,"skipped":558,"failed":0} SSSSSSSSSSS ------------------------------ Oct 23 04:00:57.484: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:51.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Oct 23 04:00:51.838: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:00:53.841: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:00:55.841: INFO: The status of Pod master is Running (Ready = true) Oct 23 04:00:55.854: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:00:57.856: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:00:59.858: INFO: The status of Pod slave is Running (Ready = true) Oct 23 04:00:59.874: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:01:01.878: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:01:03.878: INFO: The status of Pod private is Running (Ready = true) Oct 23 04:01:03.894: INFO: The status of Pod default is 
Pending, waiting for it to be Running (with Ready = true) Oct 23 04:01:05.896: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:01:07.898: INFO: The status of Pod default is Running (Ready = true) Oct 23 04:01:07.903: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:07.903: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:07.991: INFO: Exec stderr: "" Oct 23 04:01:07.994: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:07.994: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.078: INFO: Exec stderr: "" Oct 23 04:01:08.080: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.080: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.185: INFO: Exec stderr: "" Oct 23 04:01:08.188: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.188: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.271: INFO: Exec stderr: "" Oct 23 04:01:08.274: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.274: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.369: INFO: Exec stderr: "" Oct 23 04:01:08.371: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.371: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.451: INFO: Exec stderr: "" Oct 23 04:01:08.453: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.453: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.531: INFO: Exec stderr: "" Oct 23 04:01:08.533: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.533: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.615: INFO: Exec stderr: "" Oct 23 04:01:08.618: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.618: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.703: INFO: Exec stderr: "" Oct 23 04:01:08.705: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.705: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.784: INFO: Exec stderr: "" Oct 23 04:01:08.786: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.786: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.867: INFO: Exec stderr: "" Oct 23 04:01:08.870: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.870: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:08.994: INFO: Exec stderr: "" Oct 23 04:01:08.998: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:08.998: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.078: INFO: Exec stderr: "" Oct 23 04:01:09.080: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.080: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.164: INFO: Exec stderr: "" Oct 23 04:01:09.167: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.167: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.251: INFO: Exec stderr: "" Oct 23 04:01:09.254: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.254: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.335: INFO: Exec stderr: "" Oct 23 04:01:09.337: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.337: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.420: INFO: Exec stderr: "" Oct 23 04:01:09.423: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.423: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.514: INFO: Exec stderr: "" Oct 23 04:01:09.518: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.518: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.603: INFO: Exec stderr: "" Oct 23 04:01:09.605: INFO: ExecWithOptions 
{Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:09.605: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:09.687: INFO: Exec stderr: "" Oct 23 04:01:11.705: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-3545"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-3545"/host; echo host > "/var/lib/kubelet/mount-propagation-3545"/host/file] Namespace:mount-propagation-3545 PodName:hostexec-node2-4zbw7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:01:11.705: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:11.798: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:11.798: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:11.880: INFO: pod master mount master: stdout: "master", stderr: "" error: Oct 23 04:01:11.883: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:11.883: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:11.978: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:11.982: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:11.982: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.060: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:12.062: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.062: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.145: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:12.147: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.147: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.273: INFO: pod master mount host: stdout: "host", stderr: "" error: Oct 23 04:01:12.276: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.276: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.376: INFO: pod slave mount master: stdout: "master", stderr: "" 
error: Oct 23 04:01:12.380: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.380: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.471: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Oct 23 04:01:12.474: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.474: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.554: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:12.556: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.556: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.636: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:12.639: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.639: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.718: INFO: pod slave mount host: stdout: "host", stderr: "" error: Oct 23 04:01:12.720: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.720: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.818: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:12.820: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.820: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:12.909: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:12.913: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:12.913: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.002: INFO: pod private mount private: stdout: "private", stderr: "" error: Oct 23 04:01:13.004: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.004: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.139: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such 
file or directory" error: command terminated with exit code 1 Oct 23 04:01:13.142: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.142: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.226: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:13.228: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.229: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.310: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:13.313: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.313: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.398: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:13.401: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.401: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.486: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:13.489: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.489: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.622: INFO: pod default mount default: stdout: "default", stderr: "" error: Oct 23 04:01:13.626: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.626: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.715: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:01:13.715: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-3545"/master/file` = master] Namespace:mount-propagation-3545 PodName:hostexec-node2-4zbw7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:01:13.715: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.796: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-3545"/slave/file] Namespace:mount-propagation-3545 PodName:hostexec-node2-4zbw7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:01:13.796: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.891: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-3545"/host] Namespace:mount-propagation-3545 PodName:hostexec-node2-4zbw7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:01:13.891: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:13.990: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-3545 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:13.990: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:14.087: INFO: Exec stderr: "" Oct 23 04:01:14.090: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-3545 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:14.090: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:14.178: INFO: Exec stderr: "" Oct 23 04:01:14.180: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-3545 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:14.180: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:14.299: INFO: Exec stderr: "" Oct 23 04:01:14.303: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-3545 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:01:14.303: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:01:14.438: INFO: Exec stderr: "" Oct 23 04:01:14.438: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-3545"] Namespace:mount-propagation-3545 PodName:hostexec-node2-4zbw7 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:01:14.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-4zbw7 in namespace mount-propagation-3545 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:01:14.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-3545" for this suite. 
• [SLOW TEST:22.741 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:28.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 23 04:00:28.977: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Oct 23 04:00:29.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-319 create -f -' Oct 23 04:00:29.433: INFO: stderr: "" Oct 23 04:00:29.433: INFO: stdout: "pod/liveness-exec created\n" Oct 23 04:00:29.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-319 create -f -' Oct 23 04:00:29.766: INFO: stderr: "" Oct 23 04:00:29.766: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Oct 23 04:00:35.775: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:37.778: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:39.780: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:39.782: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:41.785: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:41.786: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:43.789: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:43.789: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:45.792: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:45.792: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:47.797: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:47.797: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:49.800: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:49.800: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:51.807: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:51.807: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:53.811: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:53.811: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:55.814: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:55.814: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:57.817: INFO: Pod: liveness-http, restart count:0 Oct 23 04:00:57.817: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:59.821: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:00:59.821: INFO: Pod: liveness-http, restart count:0 Oct 23 04:01:01.826: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:01.826: INFO: Pod: liveness-http, restart count:0 Oct 23 04:01:03.831: INFO: Pod: liveness-http, restart count:0 Oct 23 04:01:03.831: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:05.835: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:05.835: INFO: Pod: liveness-http, restart count:0 Oct 23 04:01:07.842: INFO: Pod: liveness-http, restart count:0 Oct 
23 04:01:07.842: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:09.846: INFO: Pod: liveness-http, restart count:0 Oct 23 04:01:09.846: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:11.849: INFO: Pod: liveness-http, restart count:1 Oct 23 04:01:11.849: INFO: Saw liveness-http restart, succeeded... Oct 23 04:01:11.849: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:13.853: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:15.857: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:17.861: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:19.866: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:21.872: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:23.878: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:25.882: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:27.886: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:29.890: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:31.894: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:33.898: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:35.901: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:37.905: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:39.909: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:01:41.914: INFO: Pod: liveness-exec, restart count:1 Oct 23 04:01:41.914: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:01:41.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-319" for this suite. • [SLOW TEST:72.976 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":6,"skipped":645,"failed":0} Oct 23 04:01:41.928: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:36.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-c3054952-5f31-4528-b255-aa0549095d54 in namespace container-probe-7098 Oct 23 04:00:44.393: INFO: Started pod busybox-c3054952-5f31-4528-b255-aa0549095d54 in namespace container-probe-7098 Oct 23 04:00:44.393: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (838ns elapsed) Oct 23 04:00:46.394: INFO: pod 
container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (2.0013825s elapsed) Oct 23 04:00:48.397: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (4.003861689s elapsed) Oct 23 04:00:50.399: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (6.006205317s elapsed) Oct 23 04:00:52.399: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (8.006535272s elapsed) Oct 23 04:00:54.403: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (10.009851174s elapsed) Oct 23 04:00:56.404: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (12.010864789s elapsed) Oct 23 04:00:58.406: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (14.013259292s elapsed) Oct 23 04:01:00.409: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (16.015822866s elapsed) Oct 23 04:01:02.410: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (18.016750021s elapsed) Oct 23 04:01:04.411: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (20.018518972s elapsed) Oct 23 04:01:06.414: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (22.020797285s elapsed) Oct 23 04:01:08.416: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (24.023165619s elapsed) Oct 23 04:01:10.420: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (26.027302358s elapsed) Oct 23 04:01:12.421: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (28.028007859s elapsed) Oct 23 04:01:14.421: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (30.028203221s elapsed) Oct 23 04:01:16.422: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (32.029051674s elapsed) Oct 23 04:01:18.423: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (34.030573757s elapsed) Oct 23 04:01:20.425: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (36.03182859s elapsed) Oct 23 04:01:22.426: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (38.032655778s elapsed) Oct 23 04:01:24.427: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (40.034056195s elapsed) Oct 23 04:01:26.428: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (42.034800985s elapsed) Oct 23 04:01:28.432: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (44.03876538s elapsed) Oct 23 04:01:30.436: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (46.042598812s elapsed) Oct 23 04:01:32.436: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (48.043110325s elapsed) Oct 23 04:01:34.439: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (50.045726246s elapsed) Oct 23 04:01:36.441: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (52.047874016s elapsed) Oct 23 04:01:38.443: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready 
(54.049842247s elapsed) Oct 23 04:01:40.444: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (56.050623102s elapsed) Oct 23 04:01:42.445: INFO: pod container-probe-7098/busybox-c3054952-5f31-4528-b255-aa0549095d54 is not ready (58.052161311s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:01:44.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7098" for this suite. • [SLOW TEST:68.116 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":7,"skipped":840,"failed":0} Oct 23 04:01:44.465: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:33.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Oct 23 04:00:39.373: INFO: watch delete seen for pod-submit-status-1-0 Oct 23 04:00:39.373: INFO: Pod pod-submit-status-1-0 on node node2 timings total=6.078015918s t=495ms run=0s execute=0s Oct 23 04:00:39.927: INFO: watch delete seen for pod-submit-status-2-0 Oct 23 04:00:39.927: INFO: Pod pod-submit-status-2-0 on node node2 timings total=6.632524091s t=1.109s run=0s execute=0s Oct 23 04:00:40.326: INFO: watch delete seen for pod-submit-status-0-0 Oct 23 04:00:40.326: INFO: Pod pod-submit-status-0-0 on node node2 timings total=7.031668385s t=666ms run=0s execute=0s Oct 23 04:00:45.418: INFO: watch delete seen for pod-submit-status-1-1 Oct 23 04:00:45.418: INFO: Pod pod-submit-status-1-1 on node node1 timings total=6.045464258s t=732ms run=0s execute=0s Oct 23 04:00:46.475: INFO: watch delete seen for pod-submit-status-0-1 Oct 23 04:00:46.475: INFO: Pod pod-submit-status-0-1 on node node1 timings total=6.148601903s t=1.157s run=0s execute=0s Oct 23 04:00:53.844: INFO: watch delete seen for pod-submit-status-1-2 Oct 23 04:00:53.844: INFO: Pod pod-submit-status-1-2 on node node1 timings total=8.425470389s t=1.035s run=2s execute=0s Oct 23 04:00:53.853: INFO: watch delete seen for pod-submit-status-2-1 Oct 23 04:00:53.853: INFO: Pod pod-submit-status-2-1 on node node1 timings total=13.925532078s t=1.349s run=0s execute=0s Oct 23 04:00:54.199: INFO: watch delete seen for pod-submit-status-0-2 Oct 23 04:00:54.199: INFO: Pod pod-submit-status-0-2 on node node2 
timings total=7.723766736s t=854ms run=0s execute=0s Oct 23 04:01:04.197: INFO: watch delete seen for pod-submit-status-2-2 Oct 23 04:01:04.197: INFO: Pod pod-submit-status-2-2 on node node2 timings total=10.344334361s t=423ms run=0s execute=0s Oct 23 04:01:04.221: INFO: watch delete seen for pod-submit-status-1-3 Oct 23 04:01:04.221: INFO: Pod pod-submit-status-1-3 on node node2 timings total=10.377572234s t=1.505s run=0s execute=0s Oct 23 04:01:06.387: INFO: watch delete seen for pod-submit-status-2-3 Oct 23 04:01:06.387: INFO: Pod pod-submit-status-2-3 on node node1 timings total=2.189707445s t=920ms run=0s execute=0s Oct 23 04:01:06.465: INFO: watch delete seen for pod-submit-status-2-4 Oct 23 04:01:06.465: INFO: Pod pod-submit-status-2-4 on node node1 timings total=78.208339ms t=52ms run=0s execute=0s Oct 23 04:01:13.848: INFO: watch delete seen for pod-submit-status-2-5 Oct 23 04:01:13.848: INFO: Pod pod-submit-status-2-5 on node node1 timings total=7.382632237s t=1.69s run=3s execute=0s Oct 23 04:01:13.857: INFO: watch delete seen for pod-submit-status-1-4 Oct 23 04:01:13.857: INFO: Pod pod-submit-status-1-4 on node node1 timings total=9.635433802s t=1.066s run=0s execute=0s Oct 23 04:01:23.845: INFO: watch delete seen for pod-submit-status-1-5 Oct 23 04:01:23.845: INFO: Pod pod-submit-status-1-5 on node node1 timings total=9.988245178s t=1.628s run=0s execute=0s Oct 23 04:01:23.885: INFO: watch delete seen for pod-submit-status-2-6 Oct 23 04:01:23.885: INFO: Pod pod-submit-status-2-6 on node node1 timings total=10.036866552s t=1.251s run=0s execute=0s Oct 23 04:01:34.011: INFO: watch delete seen for pod-submit-status-2-7 Oct 23 04:01:34.011: INFO: Pod pod-submit-status-2-7 on node node1 timings total=10.126458746s t=1.688s run=2s execute=0s Oct 23 04:01:34.019: INFO: watch delete seen for pod-submit-status-1-6 Oct 23 04:01:34.019: INFO: Pod pod-submit-status-1-6 on node node1 timings total=10.173947834s t=85ms run=0s execute=0s Oct 23 04:01:34.965: INFO: watch delete seen for pod-submit-status-2-8 Oct 23 04:01:34.965: INFO: Pod pod-submit-status-2-8 on node node1 timings total=953.959262ms t=849ms run=0s execute=0s Oct 23 04:01:42.109: INFO: watch delete seen for pod-submit-status-1-7 Oct 23 04:01:42.109: INFO: Pod pod-submit-status-1-7 on node node1 timings total=8.090006413s t=1.64s run=0s execute=0s Oct 23 04:01:44.198: INFO: watch delete seen for pod-submit-status-2-9 Oct 23 04:01:44.198: INFO: Pod pod-submit-status-2-9 on node node2 timings total=9.232202157s t=1.945s run=3s execute=0s Oct 23 04:01:45.560: INFO: watch delete seen for pod-submit-status-1-8 Oct 23 04:01:45.560: INFO: Pod pod-submit-status-1-8 on node node2 timings total=3.450496999s t=334ms run=0s execute=0s Oct 23 04:01:46.582: INFO: watch delete seen for pod-submit-status-0-3 Oct 23 04:01:46.582: INFO: Pod pod-submit-status-0-3 on node node1 timings total=52.383004988s t=1.165s run=0s execute=0s Oct 23 04:01:52.534: INFO: watch delete seen for pod-submit-status-0-4 Oct 23 04:01:52.534: INFO: Pod pod-submit-status-0-4 on node node1 timings total=5.952424996s t=1.454s run=0s execute=0s Oct 23 04:01:53.842: INFO: watch delete seen for pod-submit-status-1-9 Oct 23 04:01:53.842: INFO: Pod pod-submit-status-1-9 on node node1 timings total=8.282298505s t=1.953s run=0s execute=0s Oct 23 04:01:53.852: INFO: watch delete seen for pod-submit-status-2-10 Oct 23 04:01:53.852: INFO: Pod pod-submit-status-2-10 on node node1 timings total=9.654213492s t=1.249s run=0s execute=0s Oct 23 04:02:03.850: INFO: watch delete seen for 
pod-submit-status-1-10 Oct 23 04:02:03.850: INFO: Pod pod-submit-status-1-10 on node node1 timings total=10.008086413s t=1.317s run=2s execute=0s Oct 23 04:02:04.201: INFO: watch delete seen for pod-submit-status-2-11 Oct 23 04:02:04.201: INFO: Pod pod-submit-status-2-11 on node node2 timings total=10.349496004s t=964ms run=3s execute=0s Oct 23 04:02:04.209: INFO: watch delete seen for pod-submit-status-0-5 Oct 23 04:02:04.209: INFO: Pod pod-submit-status-0-5 on node node2 timings total=11.674387322s t=1.695s run=3s execute=0s Oct 23 04:02:07.119: INFO: watch delete seen for pod-submit-status-1-11 Oct 23 04:02:07.119: INFO: Pod pod-submit-status-1-11 on node node1 timings total=3.268596513s t=314ms run=0s execute=0s Oct 23 04:02:10.662: INFO: watch delete seen for pod-submit-status-1-12 Oct 23 04:02:10.662: INFO: Pod pod-submit-status-1-12 on node node2 timings total=3.543360807s t=1.271s run=0s execute=0s Oct 23 04:02:14.204: INFO: watch delete seen for pod-submit-status-0-6 Oct 23 04:02:14.204: INFO: Pod pod-submit-status-0-6 on node node2 timings total=9.995147337s t=1.772s run=0s execute=0s Oct 23 04:02:14.211: INFO: watch delete seen for pod-submit-status-2-12 Oct 23 04:02:14.211: INFO: Pod pod-submit-status-2-12 on node node2 timings total=10.009858019s t=145ms run=0s execute=0s Oct 23 04:02:23.851: INFO: watch delete seen for pod-submit-status-0-7 Oct 23 04:02:23.851: INFO: Pod pod-submit-status-0-7 on node node1 timings total=9.646507695s t=1.204s run=0s execute=0s Oct 23 04:02:23.859: INFO: watch delete seen for pod-submit-status-1-13 Oct 23 04:02:23.859: INFO: Pod pod-submit-status-1-13 on node node1 timings total=13.196397656s t=650ms run=0s execute=0s Oct 23 04:02:24.210: INFO: watch delete seen for pod-submit-status-2-13 Oct 23 04:02:24.211: INFO: Pod pod-submit-status-2-13 on node node2 timings total=9.999144733s t=1.466s run=0s execute=0s Oct 23 04:02:26.769: INFO: watch delete seen for pod-submit-status-0-8 Oct 23 04:02:26.769: INFO: Pod pod-submit-status-0-8 on node node1 timings total=2.918156668s t=540ms run=0s execute=0s Oct 23 04:02:33.844: INFO: watch delete seen for pod-submit-status-1-14 Oct 23 04:02:33.844: INFO: Pod pod-submit-status-1-14 on node node1 timings total=9.985045833s t=1.607s run=0s execute=0s Oct 23 04:02:33.861: INFO: watch delete seen for pod-submit-status-0-9 Oct 23 04:02:33.861: INFO: Pod pod-submit-status-0-9 on node node1 timings total=7.091935987s t=310ms run=0s execute=0s Oct 23 04:02:34.204: INFO: watch delete seen for pod-submit-status-2-14 Oct 23 04:02:34.205: INFO: Pod pod-submit-status-2-14 on node node2 timings total=9.993900191s t=1.992s run=2s execute=0s Oct 23 04:02:43.847: INFO: watch delete seen for pod-submit-status-0-10 Oct 23 04:02:43.847: INFO: Pod pod-submit-status-0-10 on node node1 timings total=9.985974636s t=1.885s run=0s execute=0s Oct 23 04:02:52.350: INFO: watch delete seen for pod-submit-status-0-11 Oct 23 04:02:52.350: INFO: Pod pod-submit-status-0-11 on node node1 timings total=8.502736401s t=1.083s run=0s execute=0s Oct 23 04:03:03.845: INFO: watch delete seen for pod-submit-status-0-12 Oct 23 04:03:03.845: INFO: Pod pod-submit-status-0-12 on node node1 timings total=11.495587047s t=539ms run=0s execute=0s Oct 23 04:03:13.842: INFO: watch delete seen for pod-submit-status-0-13 Oct 23 04:03:13.842: INFO: Pod pod-submit-status-0-13 on node node1 timings total=9.996312292s t=1.998s run=2s execute=0s Oct 23 04:03:23.844: INFO: watch delete seen for pod-submit-status-0-14 Oct 23 04:03:23.844: INFO: Pod 
pod-submit-status-0-14 on node node1 timings total=10.002050065s t=346ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:03:23.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9162" for this suite. • [SLOW TEST:170.583 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":4,"skipped":898,"failed":0} Oct 23 04:03:23.855: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:17.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-8fa7a247-3c07-4a9d-a3d3-43505860f1c3 in namespace container-probe-3754 Oct 23 04:00:24.041: INFO: Started pod liveness-8fa7a247-3c07-4a9d-a3d3-43505860f1c3 in namespace container-probe-3754 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:00:24.043: INFO: Initial restart count of pod liveness-8fa7a247-3c07-4a9d-a3d3-43505860f1c3 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:04:24.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3754" for this suite. 
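------------------------------
The restart count stays at 0 for the whole observation window because, since Kubernetes 1.14, the kubelet's HTTP prober does not follow a redirect that points at a different host: the 3xx response itself is counted as probe success, and a ProbeWarning event is recorded instead. A sketch of the kind of probe involved, assuming an illustrative redirect path and port; note the handler field is named Handler in the v1.21 API under test and ProbeHandler from v1.23 on:

package probes

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// nonLocalRedirectLiveness returns a liveness probe whose target replies
// with a 302 to a non-local URL. The kubelet treats the 3xx as success and
// never restarts the container, which is what the test asserts.
func nonLocalRedirectLiveness() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/redirect?loc=http%3A%2F%2F0.0.0.0%2F", // hypothetical redirect endpoint
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
}
------------------------------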
• [SLOW TEST:246.626 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":6,"skipped":932,"failed":0} Oct 23 04:04:24.619: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:42.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-33b32562-efab-4230-ad8c-16978f25cf5e in namespace container-probe-7538 Oct 23 04:00:48.897: INFO: Started pod startup-33b32562-efab-4230-ad8c-16978f25cf5e in namespace container-probe-7538 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:00:48.900: INFO: Initial restart count of pod startup-33b32562-efab-4230-ad8c-16978f25cf5e is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:04:49.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7538" for this suite. 
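------------------------------
The liveness probe never fires during the roughly four minutes observed above because a configured startup probe gates it: until the startup probe succeeds (or exhausts its failureThreshold), the kubelet runs no liveness or readiness checks at all, so failureThreshold × periodSeconds of the startup probe is the container's slow-start budget. A sketch of the pattern, with illustrative commands and thresholds:

package probes

import corev1 "k8s.io/api/core/v1"

// slowStartContainer pairs an aggressive liveness probe with a startup
// probe. The liveness probe only begins running after the startup probe
// has succeeded once, so a slow-starting container is not killed.
func slowStartContainer() corev1.Container {
	return corev1.Container{
		Name:  "cntr",
		Image: "busybox:1.28",
		StartupProbe: &corev1.Probe{
			ProbeHandler:     corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}}},
			PeriodSeconds:    10,
			FailureThreshold: 30, // up to ~300s of startup budget
		},
		LivenessProbe: &corev1.Probe{
			ProbeHandler:     corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}}},
			PeriodSeconds:    10,
			FailureThreshold: 1, // strict once the container is up
		},
	}
}
------------------------------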
• [SLOW TEST:246.644 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":7,"skipped":644,"failed":0} Oct 23 04:04:49.499: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:00:31.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Oct 23 04:00:31.511: INFO: Waiting up to 5m0s for node node2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Oct 23 04:00:32.523: INFO: node status heartbeat is unchanged for 1.004606193s, waiting for 1m20s Oct 23 04:00:33.524: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Oct 23 04:00:33.529: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:32 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:32 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:32 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    
Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:00:34.523: INFO: node status heartbeat is unchanged for 999.575911ms, waiting for 1m20s Oct 23 04:00:35.522: INFO: node status heartbeat is unchanged for 1.998350765s, waiting for 1m20s Oct 23 04:00:36.522: INFO: node status heartbeat is unchanged for 2.997894496s, waiting for 1m20s Oct 23 04:00:37.522: INFO: node status heartbeat is unchanged for 3.998317514s, waiting for 1m20s Oct 23 04:00:38.523: INFO: node status heartbeat is unchanged for 4.999149709s, waiting for 1m20s Oct 23 04:00:39.523: INFO: node status heartbeat is unchanged for 5.999728086s, waiting for 1m20s Oct 23 04:00:40.522: INFO: node status heartbeat is unchanged for 6.998043865s, waiting for 1m20s Oct 23 04:00:41.521: INFO: node status heartbeat is unchanged for 7.997495615s, waiting for 1m20s Oct 23 04:00:42.524: INFO: node status heartbeat is unchanged for 8.99996677s, waiting for 1m20s Oct 23 04:00:43.521: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:00:43.526: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:32 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:42 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:32 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:42 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:32 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:42 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 
10250}},    NodeInfo: {MachineID: "82312646736a4d47a5e2182417308818", SystemUUID: "80B3CD56-852F-E711-906E-0017A4403562", BootID: "045f38e2-ca45-4931-a8ac-a14f5e34cbd2", KernelVersion: "3.10.0-1160.45.1.el7.x86_64", ...},    Images: []v1.ContainerImage{    ... // 32 identical elements    {Names: {"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., "k8s.gcr.io/e2e-test-images/nonewprivs:1.3"}, SizeBytes: 7107254},    {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, +  { +  Names: []string{ +  "gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c6"..., +  "gcr.io/authenticated-image-pulling/alpine:3.7", +  }, +  SizeBytes: 4206620, +  },    {Names: {"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad"..., "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}, SizeBytes: 1154361},    {Names: {"busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea"..., "busybox:1.28"}, SizeBytes: 1146369},    ... // 2 identical elements    },    VolumesInUse: nil,    VolumesAttached: nil,    Config: nil,   } Oct 23 04:00:44.523: INFO: node status heartbeat is unchanged for 1.002175731s, waiting for 1m20s Oct 23 04:00:45.522: INFO: node status heartbeat is unchanged for 2.001462689s, waiting for 1m20s Oct 23 04:00:46.523: INFO: node status heartbeat is unchanged for 3.002005016s, waiting for 1m20s Oct 23 04:00:47.522: INFO: node status heartbeat is unchanged for 4.001152413s, waiting for 1m20s Oct 23 04:00:48.522: INFO: node status heartbeat is unchanged for 5.001526975s, waiting for 1m20s Oct 23 04:00:49.522: INFO: node status heartbeat is unchanged for 6.000847247s, waiting for 1m20s Oct 23 04:00:50.523: INFO: node status heartbeat is unchanged for 7.002005109s, waiting for 1m20s Oct 23 04:00:51.541: INFO: node status heartbeat is unchanged for 8.020218505s, waiting for 1m20s Oct 23 04:00:52.522: INFO: node status heartbeat is unchanged for 9.000742243s, waiting for 1m20s Oct 23 04:00:53.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:00:53.528: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:42 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:52 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:42 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:52 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: 
"KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:42 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:52 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:00:54.523: INFO: node status heartbeat is unchanged for 999.679792ms, waiting for 1m20s Oct 23 04:00:55.522: INFO: node status heartbeat is unchanged for 1.998684205s, waiting for 1m20s Oct 23 04:00:56.523: INFO: node status heartbeat is unchanged for 3.00027492s, waiting for 1m20s Oct 23 04:00:57.522: INFO: node status heartbeat is unchanged for 3.998320547s, waiting for 1m20s Oct 23 04:00:58.524: INFO: node status heartbeat is unchanged for 5.00038134s, waiting for 1m20s Oct 23 04:00:59.525: INFO: node status heartbeat is unchanged for 6.001320651s, waiting for 1m20s Oct 23 04:01:00.522: INFO: node status heartbeat is unchanged for 6.998654254s, waiting for 1m20s Oct 23 04:01:01.522: INFO: node status heartbeat is unchanged for 7.999308475s, waiting for 1m20s Oct 23 04:01:02.522: INFO: node status heartbeat is unchanged for 8.998892219s, waiting for 1m20s Oct 23 04:01:03.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:01:03.528: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:02 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:02 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:00:52 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:02 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    
Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:01:04.522: INFO: node status heartbeat is unchanged for 999.116223ms, waiting for 1m20s Oct 23 04:01:05.522: INFO: node status heartbeat is unchanged for 1.998856231s, waiting for 1m20s Oct 23 04:01:06.524: INFO: node status heartbeat is unchanged for 3.000606902s, waiting for 1m20s Oct 23 04:01:07.523: INFO: node status heartbeat is unchanged for 4.000179872s, waiting for 1m20s Oct 23 04:01:08.524: INFO: node status heartbeat is unchanged for 5.001214902s, waiting for 1m20s Oct 23 04:01:09.526: INFO: node status heartbeat is unchanged for 6.002569544s, waiting for 1m20s Oct 23 04:01:10.523: INFO: node status heartbeat is unchanged for 6.999419498s, waiting for 1m20s Oct 23 04:01:11.523: INFO: node status heartbeat is unchanged for 7.999447717s, waiting for 1m20s Oct 23 04:01:12.523: INFO: node status heartbeat is unchanged for 8.999849319s, waiting for 1m20s Oct 23 04:01:13.524: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:01:13.528: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:12 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:12 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:02 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:12 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:01:14.524: INFO: node status heartbeat is unchanged for 999.938185ms, waiting for 1m20s Oct 23 04:01:15.522: INFO: node status heartbeat is unchanged for 1.99803967s, waiting for 1m20s Oct 23 04:01:16.523: INFO: node status heartbeat is unchanged for 2.999500789s, waiting for 1m20s Oct 23 04:01:17.523: INFO: node status heartbeat is unchanged for 3.999381764s, waiting for 1m20s Oct 23 04:01:18.523: INFO: node status heartbeat is unchanged for 4.999421792s, waiting for 1m20s Oct 23 04:01:19.523: INFO: node status heartbeat is unchanged for 5.999619234s, waiting for 1m20s Oct 23 04:01:20.522: INFO: node status heartbeat is unchanged for 6.997884315s, waiting for 1m20s Oct 23 04:01:21.523: INFO: node status heartbeat is unchanged for 7.998987005s, waiting for 1m20s Oct 23 04:01:22.524: INFO: node status heartbeat is unchanged for 8.999976051s, waiting for 1m20s Oct 23 04:01:23.524: INFO: node status heartbeat is unchanged for 10.000405352s, waiting for 1m20s Oct 23 04:01:24.524: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Oct 23 04:01:24.529: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:23 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:23 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:12 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:23 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:01:25.523: INFO: node status heartbeat is unchanged for 998.268432ms, waiting for 1m20s Oct 23 04:01:26.525: INFO: node status heartbeat is unchanged for 2.000661413s, waiting for 1m20s Oct 23 04:01:27.523: INFO: node status heartbeat is unchanged for 2.998320536s, waiting for 1m20s Oct 23 04:01:28.523: INFO: node status heartbeat is unchanged for 3.998654377s, waiting for 1m20s Oct 23 04:01:29.523: INFO: node status heartbeat is unchanged for 4.999083468s, waiting for 1m20s Oct 23 04:01:30.522: INFO: node status heartbeat is unchanged for 5.997647316s, waiting for 1m20s Oct 23 04:01:31.523: INFO: node status heartbeat is unchanged for 6.998786555s, waiting for 1m20s Oct 23 04:01:32.523: INFO: node status heartbeat is unchanged for 7.998298937s, waiting for 1m20s Oct 23 04:01:33.522: INFO: node status heartbeat is unchanged for 8.997898199s, waiting for 1m20s Oct 23 04:01:34.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:01:34.528: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:33 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:33 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:23 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:33 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
    // 5 identical fields
  }
Oct 23 04:01:35.522: INFO: node status heartbeat is unchanged for 999.345867ms, waiting for 1m20s
Oct 23 04:01:36.525: INFO: node status heartbeat is unchanged for 2.001710158s, waiting for 1m20s
Oct 23 04:01:37.522: INFO: node status heartbeat is unchanged for 2.999529227s, waiting for 1m20s
Oct 23 04:01:38.522: INFO: node status heartbeat is unchanged for 3.99909153s, waiting for 1m20s
Oct 23 04:01:39.523: INFO: node status heartbeat is unchanged for 5.000146436s, waiting for 1m20s
Oct 23 04:01:40.523: INFO: node status heartbeat is unchanged for 6.000080683s, waiting for 1m20s
Oct 23 04:01:41.522: INFO: node status heartbeat is unchanged for 6.999039237s, waiting for 1m20s
Oct 23 04:01:42.523: INFO: node status heartbeat is unchanged for 7.999751348s, waiting for 1m20s
Oct 23 04:01:43.524: INFO: node status heartbeat is unchanged for 9.000974265s, waiting for 1m20s
Oct 23 04:01:44.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:01:44.526: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:33 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:43 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:33 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:43 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:33 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:43 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:01:45.522: INFO: node status heartbeat is unchanged for 1.000699459s, waiting for 1m20s
Oct 23 04:01:46.522: INFO: node status heartbeat is unchanged for 2.000149428s, waiting for 1m20s
Oct 23 04:01:47.522: INFO: node status heartbeat is unchanged for 3.000154369s, waiting for 1m20s
Oct 23 04:01:48.523: INFO: node status heartbeat is unchanged for 4.001061061s, waiting for 1m20s
Oct 23 04:01:49.522: INFO: node status heartbeat is unchanged for 5.000062481s, waiting for 1m20s
Oct 23 04:01:50.522: INFO: node status heartbeat is unchanged for 6.000774717s, waiting for 1m20s
Oct 23 04:01:51.523: INFO: node status heartbeat is unchanged for 7.001422216s, waiting for 1m20s
Oct 23 04:01:52.522: INFO: node status heartbeat is unchanged for 8.000398345s, waiting for 1m20s
Oct 23 04:01:53.523: INFO: node status heartbeat is unchanged for 9.00112183s, waiting for 1m20s
Oct 23 04:01:54.523: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Oct 23 04:01:54.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:43 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:43 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:43 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:01:55.523: INFO: node status heartbeat is unchanged for 1.000501831s, waiting for 1m20s
Oct 23 04:01:56.524: INFO: node status heartbeat is unchanged for 2.001074987s, waiting for 1m20s
Oct 23 04:01:57.522: INFO: node status heartbeat is unchanged for 2.999467549s, waiting for 1m20s
Oct 23 04:01:58.526: INFO: node status heartbeat is unchanged for 4.003056476s, waiting for 1m20s
Oct 23 04:01:59.526: INFO: node status heartbeat is unchanged for 5.003048726s, waiting for 1m20s
Oct 23 04:02:00.522: INFO: node status heartbeat is unchanged for 5.999419144s, waiting for 1m20s
Oct 23 04:02:01.522: INFO: node status heartbeat is unchanged for 6.999601305s, waiting for 1m20s
Oct 23 04:02:02.523: INFO: node status heartbeat is unchanged for 8.000103062s, waiting for 1m20s
Oct 23 04:02:03.522: INFO: node status heartbeat is unchanged for 8.999311867s, waiting for 1m20s
Oct 23 04:02:04.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:02:04.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:01:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:02:05.522: INFO: node status heartbeat is unchanged for 999.015036ms, waiting for 1m20s
Oct 23 04:02:06.522: INFO: node status heartbeat is unchanged for 1.999532769s, waiting for 1m20s
Oct 23 04:02:07.522: INFO: node status heartbeat is unchanged for 2.999491739s, waiting for 1m20s
Oct 23 04:02:08.522: INFO: node status heartbeat is unchanged for 3.999137102s, waiting for 1m20s
Oct 23 04:02:09.523: INFO: node status heartbeat is unchanged for 5.000087775s, waiting for 1m20s
Oct 23 04:02:10.524: INFO: node status heartbeat is unchanged for 6.001039493s, waiting for 1m20s
Oct 23 04:02:11.523: INFO: node status heartbeat is unchanged for 7.000467487s, waiting for 1m20s
Oct 23 04:02:12.523: INFO: node status heartbeat is unchanged for 8.000056903s, waiting for 1m20s
Oct 23 04:02:13.522: INFO: node status heartbeat is unchanged for 8.999215484s, waiting for 1m20s
Oct 23 04:02:14.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:02:14.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:02:15.523: INFO: node status heartbeat is unchanged for 1.000030025s, waiting for 1m20s
Oct 23 04:02:16.523: INFO: node status heartbeat is unchanged for 2.000419203s, waiting for 1m20s
Oct 23 04:02:17.522: INFO: node status heartbeat is unchanged for 2.999027132s, waiting for 1m20s
Oct 23 04:02:18.523: INFO: node status heartbeat is unchanged for 3.999989024s, waiting for 1m20s
Oct 23 04:02:19.523: INFO: node status heartbeat is unchanged for 5.000197172s, waiting for 1m20s
Oct 23 04:02:20.521: INFO: node status heartbeat is unchanged for 5.998170195s, waiting for 1m20s
Oct 23 04:02:21.523: INFO: node status heartbeat is unchanged for 6.999658458s, waiting for 1m20s
Oct 23 04:02:22.522: INFO: node status heartbeat is unchanged for 7.99909571s, waiting for 1m20s
Oct 23 04:02:23.522: INFO: node status heartbeat is unchanged for 8.99929386s, waiting for 1m20s
Oct 23 04:02:24.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:02:24.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:02:25.522: INFO: node status heartbeat is unchanged for 999.308388ms, waiting for 1m20s
Oct 23 04:02:26.522: INFO: node status heartbeat is unchanged for 1.99987308s, waiting for 1m20s
Oct 23 04:02:27.522: INFO: node status heartbeat is unchanged for 2.999528324s, waiting for 1m20s
Oct 23 04:02:28.522: INFO: node status heartbeat is unchanged for 3.999502118s, waiting for 1m20s
Oct 23 04:02:29.523: INFO: node status heartbeat is unchanged for 5.000732188s, waiting for 1m20s
Oct 23 04:02:30.522: INFO: node status heartbeat is unchanged for 5.999853325s, waiting for 1m20s
Oct 23 04:02:31.523: INFO: node status heartbeat is unchanged for 7.000473519s, waiting for 1m20s
Oct 23 04:02:32.523: INFO: node status heartbeat is unchanged for 8.000443265s, waiting for 1m20s
Oct 23 04:02:33.522: INFO: node status heartbeat is unchanged for 8.999825732s, waiting for 1m20s
Oct 23 04:02:34.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:02:34.528: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:02:35.521: INFO: node status heartbeat is unchanged for 998.212358ms, waiting for 1m20s
Oct 23 04:02:36.522: INFO: node status heartbeat is unchanged for 1.998504165s, waiting for 1m20s
Oct 23 04:02:37.523: INFO: node status heartbeat is unchanged for 2.999348376s, waiting for 1m20s
Oct 23 04:02:38.523: INFO: node status heartbeat is unchanged for 3.999275584s, waiting for 1m20s
Oct 23 04:02:39.523: INFO: node status heartbeat is unchanged for 5.000138985s, waiting for 1m20s
Oct 23 04:02:40.522: INFO: node status heartbeat is unchanged for 5.99828573s, waiting for 1m20s
Oct 23 04:02:41.523: INFO: node status heartbeat is unchanged for 6.999873935s, waiting for 1m20s
Oct 23 04:02:42.522: INFO: node status heartbeat is unchanged for 7.998601644s, waiting for 1m20s
Oct 23 04:02:43.523: INFO: node status heartbeat is unchanged for 8.999678357s, waiting for 1m20s
Oct 23 04:02:44.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:02:44.526: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:02:45.526: INFO: node status heartbeat is unchanged for 1.004179631s, waiting for 1m20s
Oct 23 04:02:46.525: INFO: node status heartbeat is unchanged for 2.003551008s, waiting for 1m20s
Oct 23 04:02:47.522: INFO: node status heartbeat is unchanged for 3.000457939s, waiting for 1m20s
Oct 23 04:02:48.525: INFO: node status heartbeat is unchanged for 4.003930325s, waiting for 1m20s
Oct 23 04:02:49.524: INFO: node status heartbeat is unchanged for 5.002119841s, waiting for 1m20s
Oct 23 04:02:50.522: INFO: node status heartbeat is unchanged for 6.000942363s, waiting for 1m20s
Oct 23 04:02:51.525: INFO: node status heartbeat is unchanged for 7.003664846s, waiting for 1m20s
Oct 23 04:02:52.522: INFO: node status heartbeat is unchanged for 8.000713982s, waiting for 1m20s
Oct 23 04:02:53.522: INFO: node status heartbeat is unchanged for 9.000806339s, waiting for 1m20s
Oct 23 04:02:54.524: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:02:54.528: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:02:55.521: INFO: node status heartbeat is unchanged for 997.543192ms, waiting for 1m20s
Oct 23 04:02:56.524: INFO: node status heartbeat is unchanged for 2.000346447s, waiting for 1m20s
Oct 23 04:02:57.522: INFO: node status heartbeat is unchanged for 2.997875705s, waiting for 1m20s
Oct 23 04:02:58.524: INFO: node status heartbeat is unchanged for 4.000099154s, waiting for 1m20s
Oct 23 04:02:59.525: INFO: node status heartbeat is unchanged for 5.001011153s, waiting for 1m20s
Oct 23 04:03:00.522: INFO: node status heartbeat is unchanged for 5.997662724s, waiting for 1m20s
Oct 23 04:03:01.525: INFO: node status heartbeat is unchanged for 7.000793917s, waiting for 1m20s
Oct 23 04:03:02.522: INFO: node status heartbeat is unchanged for 7.998613776s, waiting for 1m20s
Oct 23 04:03:03.525: INFO: node status heartbeat is unchanged for 9.001094116s, waiting for 1m20s
Oct 23 04:03:04.524: INFO: node status heartbeat is unchanged for 10.000377901s, waiting for 1m20s
Oct 23 04:03:05.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:03:05.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:02:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:03:06.524: INFO: node status heartbeat is unchanged for 1.001610754s, waiting for 1m20s
Oct 23 04:03:07.522: INFO: node status heartbeat is unchanged for 1.999461484s, waiting for 1m20s
Oct 23 04:03:08.525: INFO: node status heartbeat is unchanged for 3.002065542s, waiting for 1m20s
Oct 23 04:03:09.523: INFO: node status heartbeat is unchanged for 4.00066488s, waiting for 1m20s
Oct 23 04:03:10.522: INFO: node status heartbeat is unchanged for 4.999663371s, waiting for 1m20s
Oct 23 04:03:11.525: INFO: node status heartbeat is unchanged for 6.002298183s, waiting for 1m20s
Oct 23 04:03:12.522: INFO: node status heartbeat is unchanged for 6.999239742s, waiting for 1m20s
Oct 23 04:03:13.525: INFO: node status heartbeat is unchanged for 8.00199163s, waiting for 1m20s
Oct 23 04:03:14.524: INFO: node status heartbeat is unchanged for 9.001653263s, waiting for 1m20s
Oct 23 04:03:15.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:03:15.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:14 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:03:16.525: INFO: node status heartbeat is unchanged for 1.002318131s, waiting for 1m20s
Oct 23 04:03:17.523: INFO: node status heartbeat is unchanged for 2.000189544s, waiting for 1m20s
Oct 23 04:03:18.526: INFO: node status heartbeat is unchanged for 3.00285564s, waiting for 1m20s
Oct 23 04:03:19.525: INFO: node status heartbeat is unchanged for 4.001961213s, waiting for 1m20s
Oct 23 04:03:20.522: INFO: node status heartbeat is unchanged for 4.999700986s, waiting for 1m20s
Oct 23 04:03:21.523: INFO: node status heartbeat is unchanged for 6.000464075s, waiting for 1m20s
Oct 23 04:03:22.523: INFO: node status heartbeat is unchanged for 7.000424887s, waiting for 1m20s
Oct 23 04:03:23.524: INFO: node status heartbeat is unchanged for 8.001800977s, waiting for 1m20s
Oct 23 04:03:24.523: INFO: node status heartbeat is unchanged for 9.000572962s, waiting for 1m20s
Oct 23 04:03:25.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:03:25.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:14 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:24 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:03:26.522: INFO: node status heartbeat is unchanged for 1.000384677s, waiting for 1m20s
Oct 23 04:03:27.522: INFO: node status heartbeat is unchanged for 2.000522009s, waiting for 1m20s
Oct 23 04:03:28.525: INFO: node status heartbeat is unchanged for 3.003276463s, waiting for 1m20s
Oct 23 04:03:29.523: INFO: node status heartbeat is unchanged for 4.000853589s, waiting for 1m20s
Oct 23 04:03:30.521: INFO: node status heartbeat is unchanged for 4.999164338s, waiting for 1m20s
Oct 23 04:03:31.522: INFO: node status heartbeat is unchanged for 5.999856941s, waiting for 1m20s
Oct 23 04:03:32.522: INFO: node status heartbeat is unchanged for 7.000462582s, waiting for 1m20s
Oct 23 04:03:33.522: INFO: node status heartbeat is unchanged for 8.000016199s, waiting for 1m20s
Oct 23 04:03:34.522: INFO: node status heartbeat is unchanged for 8.999995904s, waiting for 1m20s
Oct 23 04:03:35.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:03:35.528: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:24 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:34 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:03:36.522: INFO: node status heartbeat is unchanged for 999.514711ms, waiting for 1m20s
Oct 23 04:03:37.522: INFO: node status heartbeat is unchanged for 1.999110396s, waiting for 1m20s
Oct 23 04:03:38.523: INFO: node status heartbeat is unchanged for 2.999866119s, waiting for 1m20s
Oct 23 04:03:39.523: INFO: node status heartbeat is unchanged for 3.999740479s, waiting for 1m20s
Oct 23 04:03:40.522: INFO: node status heartbeat is unchanged for 4.998908118s, waiting for 1m20s
Oct 23 04:03:41.522: INFO: node status heartbeat is unchanged for 5.998828402s, waiting for 1m20s
Oct 23 04:03:42.522: INFO: node status heartbeat is unchanged for 6.999175862s, waiting for 1m20s
Oct 23 04:03:43.522: INFO: node status heartbeat is unchanged for 7.999519201s, waiting for 1m20s
Oct 23 04:03:44.523: INFO: node status heartbeat is unchanged for 9.00030596s, waiting for 1m20s
Oct 23 04:03:45.521: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:03:45.526: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:34 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:44 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:03:46.522: INFO: node status heartbeat is unchanged for 1.001490556s, waiting for 1m20s
Oct 23 04:03:47.522: INFO: node status heartbeat is unchanged for 2.000908331s, waiting for 1m20s
Oct 23 04:03:48.522: INFO: node status heartbeat is unchanged for 3.001207843s, waiting for 1m20s
Oct 23 04:03:49.523: INFO: node status heartbeat is unchanged for 4.001863789s, waiting for 1m20s
Oct 23 04:03:50.523: INFO: node status heartbeat is unchanged for 5.002133725s, waiting for 1m20s
Oct 23 04:03:51.522: INFO: node status heartbeat is unchanged for 6.001461656s, waiting for 1m20s
Oct 23 04:03:52.521: INFO: node status heartbeat is unchanged for 7.000492253s, waiting for 1m20s
Oct 23 04:03:53.523: INFO: node status heartbeat is unchanged for 8.00237041s, waiting for 1m20s
Oct 23 04:03:54.525: INFO: node status heartbeat is unchanged for 9.00448637s, waiting for 1m20s
Oct 23 04:03:55.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:03:55.526: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:44 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:54 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:03:56.522: INFO: node status heartbeat is unchanged for 1.00020042s, waiting for 1m20s
Oct 23 04:03:57.523: INFO: node status heartbeat is unchanged for 2.001304072s, waiting for 1m20s
Oct 23 04:03:58.522: INFO: node status heartbeat is unchanged for 3.000455418s, waiting for 1m20s
Oct 23 04:03:59.522: INFO: node status heartbeat is unchanged for 4.000215364s, waiting for 1m20s
Oct 23 04:04:00.523: INFO: node status heartbeat is unchanged for 5.001578557s, waiting for 1m20s
Oct 23 04:04:01.522: INFO: node status heartbeat is unchanged for 6.000373444s, waiting for 1m20s
Oct 23 04:04:02.522: INFO: node status heartbeat is unchanged for 7.000656684s, waiting for 1m20s
Oct 23 04:04:03.523: INFO: node status heartbeat is unchanged for 8.001134058s, waiting for 1m20s
Oct 23 04:04:04.522: INFO: node status heartbeat is unchanged for 9.0008488s, waiting for 1m20s
Oct 23 04:04:05.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:04:05.526: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:03:54 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:04 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:04:06.522: INFO: node status heartbeat is unchanged for 1.00063918s, waiting for 1m20s
Oct 23 04:04:07.522: INFO: node status heartbeat is unchanged for 1.99994855s, waiting for 1m20s
Oct 23 04:04:08.524: INFO: node status heartbeat is unchanged for 3.002693476s, waiting for 1m20s
Oct 23 04:04:09.523: INFO: node status heartbeat is unchanged for 4.001429157s, waiting for 1m20s
Oct 23 04:04:10.521: INFO: node status heartbeat is unchanged for 4.999162442s, waiting for 1m20s
Oct 23 04:04:11.522: INFO: node status heartbeat is unchanged for 5.999928882s, waiting for 1m20s
Oct 23 04:04:12.522: INFO: node status heartbeat is unchanged for 7.00017853s, waiting for 1m20s
Oct 23 04:04:13.522: INFO: node status heartbeat is unchanged for 8.000775162s, waiting for 1m20s
Oct 23 04:04:14.523: INFO: node status heartbeat is unchanged for 9.0010055s, waiting for 1m20s
Oct 23 04:04:15.523: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Oct 23 04:04:15.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:15 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:15 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:04 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:15 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:04:16.522: INFO: node status heartbeat is unchanged for 999.142725ms, waiting for 1m20s
Oct 23 04:04:17.522: INFO: node status heartbeat is unchanged for 1.999312953s, waiting for 1m20s
Oct 23 04:04:18.522: INFO: node status heartbeat is unchanged for 2.999383752s, waiting for 1m20s
Oct 23 04:04:19.523: INFO: node status heartbeat is unchanged for 4.000057522s, waiting for 1m20s
Oct 23 04:04:20.522: INFO: node status heartbeat is unchanged for 4.999666897s, waiting for 1m20s
Oct 23 04:04:21.523: INFO: node status heartbeat is unchanged for 5.999777421s, waiting for 1m20s
Oct 23 04:04:22.522: INFO: node status heartbeat is unchanged for 6.999359574s, waiting for 1m20s
Oct 23 04:04:23.523: INFO: node status heartbeat is unchanged for 8.000097969s, waiting for 1m20s
Oct 23 04:04:24.525: INFO: node status heartbeat is unchanged for 9.002173301s, waiting for 1m20s
Oct 23 04:04:25.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:04:25.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:15 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:25 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:15 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:25 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:15 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:25 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Oct 23 04:04:26.524: INFO: node status heartbeat is unchanged for 1.001337779s, waiting for 1m20s
Oct 23 04:04:27.522: INFO: node status heartbeat is unchanged for 1.999639472s, waiting for 1m20s
Oct 23 04:04:28.522: INFO: node status heartbeat is unchanged for 2.999849403s, waiting for 1m20s
Oct 23 04:04:29.524: INFO: node status heartbeat is unchanged for 4.00148467s, waiting for 1m20s
Oct 23 04:04:30.523: INFO: node status heartbeat is unchanged for 5.000758163s, waiting for 1m20s
Oct 23 04:04:31.522: INFO: node status heartbeat is unchanged for 5.999319242s, waiting for 1m20s
Oct 23 04:04:32.521: INFO: node status heartbeat is unchanged for 6.999051418s, waiting for 1m20s
Oct 23 04:04:33.523: INFO: node status heartbeat is unchanged for 8.00067213s, waiting for 1m20s
Oct 23 04:04:34.523: INFO: node status heartbeat is unchanged for 9.001061039s, waiting for 1m20s
Oct 23 04:04:35.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:04:35.527: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:25 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:35 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:25 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:35 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:25 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:04:35 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
Oct 23 04:04:36.522: INFO: node status heartbeat is unchanged for 999.341955ms, waiting for 1m20s
Oct 23 04:04:37.523: INFO: node status heartbeat is unchanged for 2.000426533s, waiting for 1m20s
Oct 23 04:04:38.522: INFO: node status heartbeat is unchanged for 3.000094702s, waiting for 1m20s
Oct 23 04:04:39.524: INFO: node status heartbeat is unchanged for 4.001485139s, waiting for 1m20s
Oct 23 04:04:40.523: INFO: node status heartbeat is unchanged for 5.00060313s, waiting for 1m20s
Oct 23 04:04:41.524: INFO: node status heartbeat is unchanged for 6.001630784s, waiting for 1m20s
Oct 23 04:04:42.553: INFO: node status heartbeat is unchanged for 7.030135689s, waiting for 1m20s
Oct 23 04:04:43.526: INFO: node status heartbeat is unchanged for 8.003479025s, waiting for 1m20s
Oct 23 04:04:44.525: INFO: node status heartbeat is unchanged for 9.002377367s, waiting for 1m20s
Oct 23 04:04:45.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:04:45.527: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:35 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:45 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:35 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:45 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:35 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:45 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 23 04:04:46.525: INFO: node status heartbeat is unchanged for 1.001964721s, waiting for 1m20s
Oct 23 04:04:47.522: INFO: node status heartbeat is unchanged for 1.999510292s, waiting for 1m20s
Oct 23 04:04:48.526: INFO: node status heartbeat is unchanged for 3.002877469s, waiting for 1m20s
Oct 23 04:04:49.523: INFO: node status heartbeat is unchanged for 4.000031836s, waiting for 1m20s
Oct 23 04:04:50.523: INFO: node status heartbeat is unchanged for 5.000130003s, waiting for 1m20s
Oct 23 04:04:51.524: INFO: node status heartbeat is unchanged for 6.001330162s, waiting for 1m20s
Oct 23 04:04:52.523: INFO: node status heartbeat is unchanged for 7.000212439s, waiting for 1m20s
Oct 23 04:04:53.523: INFO: node status heartbeat is unchanged for 8.00057997s, waiting for 1m20s
Oct 23 04:04:54.525: INFO: node status heartbeat is unchanged for 9.002108998s, waiting for 1m20s
Oct 23 04:04:55.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:04:55.527: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:45 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:55 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:45 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:55 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:45 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:55 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 23 04:04:56.524: INFO: node status heartbeat is unchanged for 1.001592519s, waiting for 1m20s
Oct 23 04:04:57.524: INFO: node status heartbeat is unchanged for 2.001595493s, waiting for 1m20s
Oct 23 04:04:58.526: INFO: node status heartbeat is unchanged for 3.003323696s, waiting for 1m20s
Oct 23 04:04:59.525: INFO: node status heartbeat is unchanged for 4.002343232s, waiting for 1m20s
Oct 23 04:05:00.525: INFO: node status heartbeat is unchanged for 5.002414622s, waiting for 1m20s
Oct 23 04:05:01.521: INFO: node status heartbeat is unchanged for 5.999241533s, waiting for 1m20s
Oct 23 04:05:02.522: INFO: node status heartbeat is unchanged for 7.000038072s, waiting for 1m20s
Oct 23 04:05:03.522: INFO: node status heartbeat is unchanged for 8.000001281s, waiting for 1m20s
Oct 23 04:05:04.523: INFO: node status heartbeat is unchanged for 9.000713591s, waiting for 1m20s
Oct 23 04:05:05.523: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:05:05.527: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:55 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:05 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:55 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:05 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:04:55 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:05 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 23 04:05:06.524: INFO: node status heartbeat is unchanged for 1.000862771s, waiting for 1m20s
Oct 23 04:05:07.522: INFO: node status heartbeat is unchanged for 1.999550097s, waiting for 1m20s
Oct 23 04:05:08.525: INFO: node status heartbeat is unchanged for 3.002403229s, waiting for 1m20s
Oct 23 04:05:09.523: INFO: node status heartbeat is unchanged for 4.00023778s, waiting for 1m20s
Oct 23 04:05:10.522: INFO: node status heartbeat is unchanged for 4.999553758s, waiting for 1m20s
Oct 23 04:05:11.525: INFO: node status heartbeat is unchanged for 6.002001544s, waiting for 1m20s
Oct 23 04:05:12.524: INFO: node status heartbeat is unchanged for 7.001274333s, waiting for 1m20s
Oct 23 04:05:13.523: INFO: node status heartbeat is unchanged for 8.000116883s, waiting for 1m20s
Oct 23 04:05:14.526: INFO: node status heartbeat is unchanged for 9.002866629s, waiting for 1m20s
Oct 23 04:05:15.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:05:15.527: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:05 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:15 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:05 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:15 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:05 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:15 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 23 04:05:16.524: INFO: node status heartbeat is unchanged for 1.002494103s, waiting for 1m20s
Oct 23 04:05:17.524: INFO: node status heartbeat is unchanged for 2.001796608s, waiting for 1m20s
Oct 23 04:05:18.525: INFO: node status heartbeat is unchanged for 3.003312676s, waiting for 1m20s
Oct 23 04:05:19.525: INFO: node status heartbeat is unchanged for 4.002862103s, waiting for 1m20s
Oct 23 04:05:20.522: INFO: node status heartbeat is unchanged for 4.99999578s, waiting for 1m20s
Oct 23 04:05:21.523: INFO: node status heartbeat is unchanged for 6.001190357s, waiting for 1m20s
Oct 23 04:05:22.522: INFO: node status heartbeat is unchanged for 7.000022969s, waiting for 1m20s
Oct 23 04:05:23.524: INFO: node status heartbeat is unchanged for 8.00201299s, waiting for 1m20s
Oct 23 04:05:24.525: INFO: node status heartbeat is unchanged for 9.002620377s, waiting for 1m20s
Oct 23 04:05:25.522: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:05:25.527: INFO:   v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:08 +0000 UTC"}, ...},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:15 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:25 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:15 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:25 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:15 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2021-10-23 04:05:25 +0000 UTC"},
  			LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
  	},
  	Addresses:       {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}},
  	DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
  	... // 5 identical fields
  }
Oct 23 04:05:26.524: INFO: node status heartbeat is unchanged for 1.002187237s, waiting for 1m20s
Oct 23 04:05:27.523: INFO: node status heartbeat is unchanged for 2.000710451s, waiting for 1m20s
Oct 23 04:05:28.524: INFO: node status heartbeat is unchanged for 3.002102072s, waiting for 1m20s
Oct 23 04:05:29.523: INFO: node status heartbeat is unchanged for 4.000973994s, waiting for 1m20s
Oct 23 04:05:30.524: INFO: node status heartbeat is unchanged for 5.001710509s, waiting for 1m20s
Oct 23 04:05:31.523: INFO: node status heartbeat is unchanged for 6.000522271s, waiting for 1m20s
Oct 23 04:05:31.526: INFO: node status heartbeat is unchanged for 6.003793728s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:05:31.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-6341" for this suite.

• [SLOW TEST:300.051 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":49,"failed":0}
Oct 23 04:05:31.546: INFO: Running AfterSuite actions on all nodes
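With the NodeLease feature enabled, the kubelet's liveness signal moves to a Lease object in the kube-node-lease namespace, and full NodeStatus updates become comparatively infrequent; the spec above passes because node2 stays Ready for the whole five-minute observation window. A rough client-go sketch of the same kind of polling check (the kubeconfig path and node name come from the log; the loop bounds and everything else are simplified illustration, not the test's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(config)

        // Poll the node and report whenever the Ready heartbeat advances.
        var last metav1.Time
        for i := 0; i < 60; i++ {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "node2", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == v1.NodeReady && !c.LastHeartbeatTime.Equal(&last) {
                    fmt.Printf("heartbeat changed: %v (Ready=%v)\n", c.LastHeartbeatTime, c.Status)
                    last = c.LastHeartbeatTime
                }
            }
            time.Sleep(time.Second)
        }
    }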
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:00:30.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
Oct 23 04:00:30.717: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:00:32.720: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:00:34.722: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:00:36.721: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:00:38.722: INFO: The status of Pod pod-back-off-image is Running (Ready = true)
STEP: getting restart delay-0
Oct 23 04:02:37.008: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-10-23 04:01:40 +0000 UTC restartedAt=2021-10-23 04:02:35 +0000 UTC (55s)
STEP: getting restart delay-1
Oct 23 04:04:14.434: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-10-23 04:02:40 +0000 UTC restartedAt=2021-10-23 04:04:13 +0000 UTC (1m33s)
STEP: getting restart delay-2
Oct 23 04:07:07.204: INFO: getRestartDelay: restartCount = 6, finishedAt=2021-10-23 04:04:18 +0000 UTC restartedAt=2021-10-23 04:07:06 +0000 UTC (2m48s)
STEP: updating the image
Oct 23 04:07:07.714: INFO: Successfully updated pod "pod-back-off-image"
STEP: get restart delay after image update
Oct 23 04:07:31.785: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-23 04:07:17 +0000 UTC restartedAt=2021-10-23 04:07:30 +0000 UTC (13s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:07:31.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4082" for this suite.

• [SLOW TEST:421.121 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681
------------------------------
{"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":9,"skipped":971,"failed":0}
Oct 23 04:07:31.796: INFO: Running AfterSuite actions on all nodes
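The delays above follow the kubelet's crash-loop back-off: roughly doubling from a 10-second base toward the 5-minute MaxContainerBackOff, then resetting when the image update changes the container spec (55s, then 1m33s, then 2m48s, then 13s after the update). A minimal sketch of that idealized schedule, assuming the default 10s base and 5m cap and ignoring the real kubelet's jitter and cleanup details; delayBefore is a hypothetical helper, not a kubelet function:

    package main

    import (
        "fmt"
        "time"
    )

    const (
        base       = 10 * time.Second // kubelet's initial crash-loop delay
        maxBackOff = 5 * time.Minute  // kubelet's MaxContainerBackOff
    )

    // delayBefore returns the idealized wait before the given restart,
    // doubling from base and saturating at maxBackOff.
    func delayBefore(restart int) time.Duration {
        d := base
        for i := 0; i < restart; i++ {
            d *= 2
            if d >= maxBackOff {
                return maxBackOff
            }
        }
        return d
    }

    func main() {
        for r := 0; r <= 6; r++ {
            fmt.Printf("restart %d: wait %v\n", r, delayBefore(r))
        }
        // Updating the image changes the container spec, so the kubelet keys
        // a fresh back-off entry and the next delay drops back near base.
        fmt.Println("after image update:", delayBefore(0))
    }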
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 03:59:17.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W1023 03:59:17.177932      38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 03:59:17.178: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 03:59:17.179: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Oct 23 03:59:17.196: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:59:19.200: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:59:21.199: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:59:23.200: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:59:25.201: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 03:59:27.199: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Oct 23 04:10:59.657: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-10-23 04:05:55 +0000 UTC restartedAt=2021-10-23 04:10:58 +0000 UTC (5m3s)
Oct 23 04:16:16.100: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-23 04:11:03 +0000 UTC restartedAt=2021-10-23 04:16:14 +0000 UTC (5m11s)
Oct 23 04:21:25.439: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-10-23 04:16:19 +0000 UTC restartedAt=2021-10-23 04:21:24 +0000 UTC (5m5s)
STEP: getting restart delay after a capped delay
Oct 23 04:26:32.821: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-10-23 04:21:29 +0000 UTC restartedAt=2021-10-23 04:26:31 +0000 UTC (5m2s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:26:32.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2296" for this suite.

• [SLOW TEST:1635.673 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":12,"failed":0}
Oct 23 04:26:32.834: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":4,"skipped":157,"failed":0}
Oct 23 04:01:14.539: INFO: Running AfterSuite actions on all nodes
Oct 23 04:26:32.869: INFO: Running AfterSuite actions on node 1
Oct 23 04:26:32.869: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1635.933 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 27m17.475449945s
Test Suite Failed
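The four capped delays in the MaxContainerBackOff spec (5m3s, 5m11s, 5m5s, 5m2s) all sit just above the 5-minute cap, with the overshoot explained by polling and container-start overhead. A toy check in the same spirit as that spec's assertion; the 30-second slack here is illustrative, not the e2e framework's exact tolerance:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        maxBackOff := 5 * time.Minute
        slack := 30 * time.Second // illustrative allowance for polling overhead
        observed := []time.Duration{ // delays from the log above
            5*time.Minute + 3*time.Second,
            5*time.Minute + 11*time.Second,
            5*time.Minute + 5*time.Second,
            5*time.Minute + 2*time.Second,
        }
        for _, d := range observed {
            if d < maxBackOff || d > maxBackOff+slack {
                fmt.Println("delay outside capped range:", d)
                return
            }
        }
        fmt.Println("all restart delays capped at ~", maxBackOff)
    }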