Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1636777663 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Nov 13 04:27:45.425: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.427: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 13 04:27:45.449: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 13 04:27:45.515: INFO: The status of Pod cmk-init-discover-node1-vkj2s is Succeeded, skipping waiting
Nov 13 04:27:45.515: INFO: The status of Pod cmk-init-discover-node2-5f4hp is Succeeded, skipping waiting
Nov 13 04:27:45.515: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 13 04:27:45.515: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Nov 13 04:27:45.515: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 13 04:27:45.527: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Nov 13 04:27:45.527: INFO: e2e test version: v1.21.5
Nov 13 04:27:45.528: INFO: kube-apiserver version: v1.21.1
Nov 13 04:27:45.528: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.534: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
Nov 13 04:27:45.533: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.555: INFO: Cluster IP family: ipv4
Nov 13 04:27:45.535: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.557: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Nov 13 04:27:45.544: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.564: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
Nov 13 04:27:45.549: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.571: INFO: Cluster IP family: ipv4
Nov 13 04:27:45.548: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.571: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Nov 13 04:27:45.551: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.576: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 13 04:27:45.555: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.578: INFO: Cluster IP family: ipv4
Nov 13 04:27:45.558: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.578: INFO: Cluster IP family: ipv4
SS
------------------------------
Nov 13 04:27:45.557: INFO: >>> kubeConfig: /root/.kube/config
Nov 13 04:27:45.580: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:45.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W1113 04:27:45.661063      34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:45.661: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:45.664: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:45.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3217" for this suite.
•SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:46.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1113 04:27:46.336902      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:46.337: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:46.338: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Nov 13 04:27:46.352: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-71e1f72a-bdfb-4318-ab7c-8a74c57bfb32" in namespace "security-context-test-6445" to be "Succeeded or Failed"
Nov 13 04:27:46.354: INFO: Pod "busybox-readonly-true-71e1f72a-bdfb-4318-ab7c-8a74c57bfb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.712277ms
Nov 13 04:27:48.357: INFO: Pod "busybox-readonly-true-71e1f72a-bdfb-4318-ab7c-8a74c57bfb32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005863026s
Nov 13 04:27:50.360: INFO: Pod "busybox-readonly-true-71e1f72a-bdfb-4318-ab7c-8a74c57bfb32": Phase="Failed", Reason="", readiness=false. Elapsed: 4.008689943s
Nov 13 04:27:50.360: INFO: Pod "busybox-readonly-true-71e1f72a-bdfb-4318-ab7c-8a74c57bfb32" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:50.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6445" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:50.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:53.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5820" for this suite.
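For reference, a minimal sketch of the kind of pod this spec drives: an image in a private registry referenced with no imagePullSecrets, so the kubelet's pull should fail and the container should stay in a waiting state (ErrImagePull/ImagePullBackOff). Written against recent k8s.io/api types; the image name and tag are assumptions, not taken from the log.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// No spec.imagePullSecrets, so pulling a private image is expected
	// to fail; the test then inspects the container's waiting reason.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-no-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "image-pull-test",
				Image:           "registry.example.com/private/agnhost:2.32", // hypothetical private image
				ImagePullPolicy: corev1.PullAlways,
			}},
		},
	}
	fmt.Println(pod.Name)
}
```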
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":322,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:53.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Nov 13 04:27:53.750: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:53.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-7037" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:45.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1113 04:27:45.739149      38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:45.739: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:45.743: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Nov 13 04:27:45.756: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186" in namespace "security-context-test-5972" to be "Succeeded or Failed"
Nov 13 04:27:45.759: INFO: Pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.897839ms
Nov 13 04:27:47.762: INFO: Pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0054595s
Nov 13 04:27:49.765: INFO: Pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008694444s
Nov 13 04:27:51.771: INFO: Pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015310432s
Nov 13 04:27:53.777: INFO: Pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020555671s
Nov 13 04:27:53.777: INFO: Pod "alpine-nnp-true-c76229a1-a629-40ab-935c-8527b990d186" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:54.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5972" for this suite.

• [SLOW TEST:8.313 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":24,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:46.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1113 04:27:46.061378      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:46.061: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:46.063: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:54.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5719" for this suite.
• [SLOW TEST:8.097 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:46.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1113 04:27:46.266511      30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:46.266: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:46.268: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull image [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:54.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1621" for this suite.
• [SLOW TEST:8.085 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull image [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:45.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1113 04:27:45.731190      36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:45.731: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:45.733: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Nov 13 04:27:45.746: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6" in namespace "security-context-test-3416" to be "Succeeded or Failed"
Nov 13 04:27:45.749: INFO: Pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.453523ms
Nov 13 04:27:47.753: INFO: Pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007016974s
Nov 13 04:27:49.757: INFO: Pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010978437s
Nov 13 04:27:51.761: INFO: Pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01472462s
Nov 13 04:27:53.767: INFO: Pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020919443s
Nov 13 04:27:53.767: INFO: Pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6" satisfied condition "Succeeded or Failed"
Nov 13 04:27:54.368: INFO: Got logs for pod "busybox-privileged-true-d54cd29b-d171-48c1-b7cd-d46e0e9f8fe6": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:54.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3416" for this suite.

• [SLOW TEST:8.668 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":26,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:46.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1113 04:27:46.030809      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:46.031: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:46.032: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Nov 13 04:27:46.044: INFO: Waiting up to 5m0s for pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789" in namespace "security-context-3484" to be "Succeeded or Failed"
Nov 13 04:27:46.046: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789": Phase="Pending", Reason="", readiness=false. Elapsed: 1.919061ms
Nov 13 04:27:48.051: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006418186s
Nov 13 04:27:50.056: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011008446s
Nov 13 04:27:52.061: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016307416s
Nov 13 04:27:54.064: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019023061s
Nov 13 04:27:56.067: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022888403s
STEP: Saw pod success
Nov 13 04:27:56.067: INFO: Pod "security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789" satisfied condition "Succeeded or Failed"
Nov 13 04:27:56.070: INFO: Trying to get logs from node node2 pod security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789 container test-container:
STEP: delete the pod
Nov 13 04:27:56.083: INFO: Waiting for pod security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789 to disappear
Nov 13 04:27:56.084: INFO: Pod security-context-0bd9c3e6-1997-49a8-ab50-3aecc1454789 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:56.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3484" for this suite.

• [SLOW TEST:10.084 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:46.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W1113 04:27:46.033813      26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:46.034: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:46.037: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
STEP: Creating pod startup-override-a5f9377a-ace1-48a7-8e0b-7bcb8f74afcc in namespace container-probe-4415
Nov 13 04:27:54.057: INFO: Started pod startup-override-a5f9377a-ace1-48a7-8e0b-7bcb8f74afcc in namespace container-probe-4415
STEP: checking the pod's current state and verifying that restartCount is present
Nov 13 04:27:54.059: INFO: Initial restart count of pod startup-override-a5f9377a-ace1-48a7-8e0b-7bcb8f74afcc is 0
Nov 13 04:27:58.070: INFO: Restart count of pod container-probe-4415/startup-override-a5f9377a-ace1-48a7-8e0b-7bcb8f74afcc is now 1 (4.010409794s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:58.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4415" for this suite.

• [SLOW TEST:12.075 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477
------------------------------
{"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:54.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Nov 13 04:27:54.086: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-6096" to be "Succeeded or Failed"
Nov 13 04:27:54.088: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.57926ms
Nov 13 04:27:56.091: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005687977s
Nov 13 04:27:58.096: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010769044s
Nov 13 04:27:58.096: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:27:58.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6096" for this suite.
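The "explicit-nonroot-uid" pod above illustrates runAsNonRoot admission: with an explicit non-root UID the kubelet can admit the container without inspecting the image's user. A minimal sketch against recent k8s.io/api types; the UID and image are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// runAsNonRoot alone rejects images that resolve to UID 0; pairing
	// it with an explicit non-root runAsUser (1234 here, arbitrary)
	// makes the verification trivial.
	runAsNonRoot := true
	uid := int64(1234)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "explicit-nonroot-uid",
				Image:   "busybox:1.29", // assumed image
				Command: []string{"id", "-u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &runAsNonRoot,
					RunAsUser:    &uid,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```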
•S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":33,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:54.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 13 04:27:54.473: INFO: Waiting up to 5m0s for pod "security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37" in namespace "security-context-6050" to be "Succeeded or Failed"
Nov 13 04:27:54.475: INFO: Pod "security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37": Phase="Pending", Reason="", readiness=false. Elapsed: 1.933372ms
Nov 13 04:27:56.479: INFO: Pod "security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004999664s
Nov 13 04:27:58.482: INFO: Pod "security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008570489s
Nov 13 04:28:00.486: INFO: Pod "security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012648297s
STEP: Saw pod success
Nov 13 04:28:00.486: INFO: Pod "security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37" satisfied condition "Succeeded or Failed"
Nov 13 04:28:00.489: INFO: Trying to get logs from node node1 pod security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37 container test-container:
STEP: delete the pod
Nov 13 04:28:00.501: INFO: Waiting for pod security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37 to disappear
Nov 13 04:28:00.503: INFO: Pod security-context-09f9d025-d5a0-41b6-9191-5c7d59b75c37 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:00.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6050" for this suite.
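As the STEP line shows, this v1.21-era spec still sets the legacy seccomp.security.alpha.kubernetes.io/pod annotation; since v1.19 the first-class equivalent is the securityContext.seccompProfile field. A minimal sketch showing both forms against recent k8s.io/api types (the image and command are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "seccomp-runtime-default",
			// Legacy annotation form, as exercised by this spec.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
			},
		},
		Spec: corev1.PodSpec{
			// First-class field form (GA since v1.19).
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{
					Type: corev1.SeccompProfileTypeRuntimeDefault,
				},
			},
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // assumed image
				// Seccomp mode is observable in the process status file.
				Command: []string{"grep", "Seccomp:", "/proc/self/status"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```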
• [SLOW TEST:6.069 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":2,"skipped":314,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:56.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run without a specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:02.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-370" for this suite.

• [SLOW TEST:6.052 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run without a specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:53.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Nov 13 04:27:53.829: INFO: Waiting up to 5m0s for pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db" in namespace "security-context-3559" to be "Succeeded or Failed"
Nov 13 04:27:53.832: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361439ms
Nov 13 04:27:55.836: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649487s
Nov 13 04:27:57.841: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012095713s
Nov 13 04:27:59.845: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016024573s
Nov 13 04:28:01.849: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020156394s
Nov 13 04:28:03.852: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.02328199s
STEP: Saw pod success
Nov 13 04:28:03.852: INFO: Pod "security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db" satisfied condition "Succeeded or Failed"
Nov 13 04:28:03.855: INFO: Trying to get logs from node node2 pod security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db container test-container:
STEP: delete the pod
Nov 13 04:28:03.946: INFO: Waiting for pod security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db to disappear
Nov 13 04:28:03.949: INFO: Pod security-context-ba45a6c8-21e9-4649-9aa2-de5d447926db no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:03.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3559" for this suite.

• [SLOW TEST:10.164 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":461,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:02.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Nov 13 04:28:02.521: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-70ed919c-efcf-4f81-bf0f-5bf72f7a99d3" in namespace "security-context-test-5462" to be "Succeeded or Failed"
Nov 13 04:28:02.524: INFO: Pod "alpine-nnp-nil-70ed919c-efcf-4f81-bf0f-5bf72f7a99d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431713ms
Nov 13 04:28:04.527: INFO: Pod "alpine-nnp-nil-70ed919c-efcf-4f81-bf0f-5bf72f7a99d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005873696s
Nov 13 04:28:06.532: INFO: Pod "alpine-nnp-nil-70ed919c-efcf-4f81-bf0f-5bf72f7a99d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010182993s
Nov 13 04:28:08.539: INFO: Pod "alpine-nnp-nil-70ed919c-efcf-4f81-bf0f-5bf72f7a99d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0171802s
Nov 13 04:28:08.539: INFO: Pod "alpine-nnp-nil-70ed919c-efcf-4f81-bf0f-5bf72f7a99d3" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:08.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5462" for this suite.

• [SLOW TEST:6.128 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":3,"skipped":307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:04.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 13 04:28:04.300: INFO: Waiting up to 5m0s for pod "security-context-6001c3c2-37fd-41d6-a345-370fe667b071" in namespace "security-context-6671" to be "Succeeded or Failed"
Nov 13 04:28:04.302: INFO: Pod "security-context-6001c3c2-37fd-41d6-a345-370fe667b071": Phase="Pending", Reason="", readiness=false. Elapsed: 2.41073ms
Nov 13 04:28:06.308: INFO: Pod "security-context-6001c3c2-37fd-41d6-a345-370fe667b071": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008083897s
Nov 13 04:28:08.316: INFO: Pod "security-context-6001c3c2-37fd-41d6-a345-370fe667b071": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01587543s
Nov 13 04:28:10.321: INFO: Pod "security-context-6001c3c2-37fd-41d6-a345-370fe667b071": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021716834s
STEP: Saw pod success
Nov 13 04:28:10.321: INFO: Pod "security-context-6001c3c2-37fd-41d6-a345-370fe667b071" satisfied condition "Succeeded or Failed"
Nov 13 04:28:10.324: INFO: Trying to get logs from node node2 pod security-context-6001c3c2-37fd-41d6-a345-370fe667b071 container test-container:
STEP: delete the pod
Nov 13 04:28:10.335: INFO: Waiting for pod security-context-6001c3c2-37fd-41d6-a345-370fe667b071 to disappear
Nov 13 04:28:10.337: INFO: Pod security-context-6001c3c2-37fd-41d6-a345-370fe667b071 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:10.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-6671" for this suite.

• [SLOW TEST:6.077 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":4,"skipped":619,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:09.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Nov 13 04:28:09.107: INFO: Waiting up to 5m0s for pod "security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13" in namespace "security-context-2367" to be "Succeeded or Failed"
Nov 13 04:28:09.110: INFO: Pod "security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.750394ms
Nov 13 04:28:11.113: INFO: Pod "security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006209316s
Nov 13 04:28:13.118: INFO: Pod "security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01125861s
STEP: Saw pod success
Nov 13 04:28:13.118: INFO: Pod "security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13" satisfied condition "Succeeded or Failed"
Nov 13 04:28:13.121: INFO: Trying to get logs from node node2 pod security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13 container test-container:
STEP: delete the pod
Nov 13 04:28:13.133: INFO: Waiting for pod security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13 to disappear
Nov 13 04:28:13.135: INFO: Pod security-context-f4f8c0c2-1ceb-4de3-8ea1-0d6d3ab95e13 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:13.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2367" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":4,"skipped":558,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:45.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
W1113 04:27:45.812278      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Nov 13 04:27:45.812: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Nov 13 04:27:45.814: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
STEP: Creating pod liveness-b7278508-e5c4-48f8-aa2d-d15e5db98fdd in namespace container-probe-316
Nov 13 04:27:53.833: INFO: Started pod liveness-b7278508-e5c4-48f8-aa2d-d15e5db98fdd in namespace container-probe-316
STEP: checking the pod's current state and verifying that restartCount is present
Nov 13 04:27:53.835: INFO: Initial restart count of pod liveness-b7278508-e5c4-48f8-aa2d-d15e5db98fdd is 0
Nov 13 04:28:17.883: INFO: Restart count of pod container-probe-316/liveness-b7278508-e5c4-48f8-aa2d-d15e5db98fdd is now 1 (24.047425725s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:17.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-316" for this suite.
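The liveness spec above watches restartCount climb once the kubelet, following a local HTTP redirect, ends up probing a failing target. A minimal sketch of such a probe against recent k8s.io/api types; the agnhost image tag, port, and redirect path are assumptions for illustration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// The kubelet follows redirects to the pod's own host; when the
	// redirected endpoint fails, the probe fails and the container is
	// restarted, which is what the spec waits ~24s to observe.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-local-redirect"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed tag
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/redirect?loc=healthz", // illustrative redirect target
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```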
• [SLOW TEST:32.107 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":62,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:13.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Nov 13 04:28:13.240: INFO: Waiting up to 5m0s for pod "pod-always-succeed2e96b99d-d5d6-4cf1-8051-0f5108e4e12d" in namespace "pods-3023" to be "Succeeded or Failed"
Nov 13 04:28:13.243: INFO: Pod "pod-always-succeed2e96b99d-d5d6-4cf1-8051-0f5108e4e12d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.125872ms
Nov 13 04:28:15.247: INFO: Pod "pod-always-succeed2e96b99d-d5d6-4cf1-8051-0f5108e4e12d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007187915s
Nov 13 04:28:17.252: INFO: Pod "pod-always-succeed2e96b99d-d5d6-4cf1-8051-0f5108e4e12d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011819369s
STEP: Saw pod success
Nov 13 04:28:17.252: INFO: Pod "pod-always-succeed2e96b99d-d5d6-4cf1-8051-0f5108e4e12d" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:19.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3023" for this suite.
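The "pod-always-succeed" spec above relies on a pod whose containers all exit 0 under restartPolicy Never; once the pod reaches Succeeded, the test inspects events to confirm the kubelet did not spin up a fresh sandbox for the finished pod. A minimal sketch of such a pod against recent k8s.io/api types (the image and name are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// restartPolicy Never plus a command that exits 0 drives the pod
	// straight to phase Succeeded with no container restarts.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-always-succeed"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "succeed",
				Image:   "busybox:1.29", // assumed image
				Command: []string{"true"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```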
• [SLOW TEST:6.062 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":5,"skipped":591,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:27:58.237: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: waiting for pod running
STEP: deleting the pod gracefully
STEP: verifying the pod is running while in the graceful period termination
Nov 13 04:28:28.323: INFO: pod is running
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:28.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9099" for this suite.
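The PreStop spec above checks that a gracefully deleted pod stays Running while its preStop hook executes, up to the grace period. A minimal sketch of the relevant fields against recent k8s.io/api types; the image, sleep durations, and 30s grace period are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// On delete, the kubelet runs the preStop hook before sending
	// SIGTERM; the pod remains Running until the hook finishes or the
	// grace period expires, whichever comes first.
	grace := int64(30)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-graceful"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29", // assumed image
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// A hook long enough that the test can observe
							// the pod still Running mid-termination.
							Command: []string{"sh", "-c", "sleep 25"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```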
• [SLOW TEST:30.095 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":2,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:28.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Nov 13 04:28:28.624: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:28:28.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-5639" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    can disable an AppArmor profile, using unconfined [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Mount propagation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:00.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename mount-propagation
STEP: Waiting for a default service account to be provisioned in namespace
[It] should propagate mounts to the host
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82
Nov 13 04:28:00.598: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:28:02.602: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:28:04.602: INFO: The status of Pod master is Running (Ready = true)
Nov 13 04:28:04.617: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:28:06.622: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:28:08.621: INFO: The status of Pod slave is Running (Ready = true) Nov 13 04:28:08.635: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:10.639: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:12.644: INFO: The status of Pod private is Running (Ready = true) Nov 13 04:28:12.658: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:14.663: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:16.664: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:18.664: INFO: The status of Pod default is Running (Ready = true) Nov 13 04:28:18.669: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:18.669: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:20.511: INFO: Exec stderr: "" Nov 13 04:28:20.513: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:20.513: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:21.900: INFO: Exec stderr: "" Nov 13 04:28:21.903: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:21.903: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:22.426: INFO: Exec stderr: "" Nov 13 04:28:22.429: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:22.429: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:23.166: INFO: Exec stderr: "" Nov 13 04:28:23.168: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:23.168: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:24.298: INFO: Exec stderr: "" Nov 13 04:28:24.300: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:24.300: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:24.578: INFO: Exec stderr: "" Nov 13 04:28:24.580: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:24.580: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:24.880: INFO: Exec stderr: "" Nov 13 04:28:24.882: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:24.883: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:25.183: INFO: Exec 
stderr: "" Nov 13 04:28:25.186: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:25.186: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:25.738: INFO: Exec stderr: "" Nov 13 04:28:25.740: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:25.740: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:25.860: INFO: Exec stderr: "" Nov 13 04:28:25.862: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:25.862: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:26.323: INFO: Exec stderr: "" Nov 13 04:28:26.326: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:26.326: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:27.103: INFO: Exec stderr: "" Nov 13 04:28:27.106: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:27.106: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:27.861: INFO: Exec stderr: "" Nov 13 04:28:27.864: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:27.864: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:27.989: INFO: Exec stderr: "" Nov 13 04:28:27.991: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:27.991: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:28.600: INFO: Exec stderr: "" Nov 13 04:28:28.603: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:28.603: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:28.766: INFO: Exec stderr: "" Nov 13 04:28:28.768: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:28.768: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:28.966: INFO: Exec stderr: "" Nov 13 04:28:28.969: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:28.969: INFO: >>> kubeConfig: 
/root/.kube/config Nov 13 04:28:29.071: INFO: Exec stderr: "" Nov 13 04:28:29.073: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:29.073: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:29.161: INFO: Exec stderr: "" Nov 13 04:28:29.165: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:29.165: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:29.262: INFO: Exec stderr: "" Nov 13 04:28:37.279: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-4999"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-4999"/host; echo host > "/var/lib/kubelet/mount-propagation-4999"/host/file] Namespace:mount-propagation-4999 PodName:hostexec-node1-lj8nm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 04:28:37.279: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.386: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:37.386: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.488: INFO: pod master mount master: stdout: "master", stderr: "" error: Nov 13 04:28:37.491: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:37.491: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.583: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:37.587: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:37.587: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.678: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:37.680: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:37.680: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.775: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:37.777: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 
13 04:28:37.778: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.877: INFO: pod master mount host: stdout: "host", stderr: "" error: Nov 13 04:28:37.881: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:37.881: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:37.971: INFO: pod slave mount master: stdout: "master", stderr: "" error: Nov 13 04:28:37.974: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:37.974: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.064: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Nov 13 04:28:38.066: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.066: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.169: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.171: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.171: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.264: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.266: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.266: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.364: INFO: pod slave mount host: stdout: "host", stderr: "" error: Nov 13 04:28:38.367: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.367: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.455: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.457: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.457: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.572: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.574: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.574: INFO: >>> kubeConfig: 
/root/.kube/config Nov 13 04:28:38.667: INFO: pod private mount private: stdout: "private", stderr: "" error: Nov 13 04:28:38.670: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.670: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.785: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.787: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.787: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.873: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.876: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.876: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:38.970: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:38.972: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:38.972: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.058: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:39.061: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.061: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.146: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:39.150: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.150: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.233: INFO: pod default mount default: stdout: "default", stderr: "" error: Nov 13 04:28:39.235: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.235: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.352: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Nov 13 04:28:39.352: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat 
"/var/lib/kubelet/mount-propagation-4999"/master/file` = master] Namespace:mount-propagation-4999 PodName:hostexec-node1-lj8nm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 04:28:39.352: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.447: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-4999"/slave/file] Namespace:mount-propagation-4999 PodName:hostexec-node1-lj8nm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 04:28:39.447: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.558: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-4999"/host] Namespace:mount-propagation-4999 PodName:hostexec-node1-lj8nm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 04:28:39.558: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.673: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-4999 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.673: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.793: INFO: Exec stderr: "" Nov 13 04:28:39.795: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-4999 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.795: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.892: INFO: Exec stderr: "" Nov 13 04:28:39.895: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-4999 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.895: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:39.991: INFO: Exec stderr: "" Nov 13 04:28:39.994: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-4999 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:28:39.994: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:28:40.083: INFO: Exec stderr: "" Nov 13 04:28:40.083: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-4999"] Namespace:mount-propagation-4999 PodName:hostexec-node1-lj8nm ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Nov 13 04:28:40.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-lj8nm in namespace mount-propagation-4999 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:28:40.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-4999" for this suite. 
• [SLOW TEST:39.648 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":3,"skipped":335,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:40.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:28:44.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7867" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":4,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:44.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-8733/configmap-test-56dbea23-4ae0-4c4e-821b-0b722a1a73fd STEP: Updating configMap configmap-8733/configmap-test-56dbea23-4ae0-4c4e-821b-0b722a1a73fd STEP: Verifying update of ConfigMap configmap-8733/configmap-test-56dbea23-4ae0-4c4e-821b-0b722a1a73fd [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:28:44.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8733" for this suite. 
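
The ConfigMap spec that just finished is a plain create/update/verify round-trip against the API server. A client-go sketch of the same flow, assuming an already-constructed clientset (the names and values here are illustrative, not the spec's own):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // updateConfigMap creates a ConfigMap, updates one key, and verifies the
    // change is visible on a fresh GET -- the same round-trip the spec performs.
    func updateConfigMap(ctx context.Context, cs kubernetes.Interface, ns string) error {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
            Data:       map[string]string{"data": "value"},
        }
        created, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
        if err != nil {
            return err
        }
        created.Data["data"] = "updated"
        if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, created, metav1.UpdateOptions{}); err != nil {
            return err
        }
        got, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, created.Name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if got.Data["data"] != "updated" {
            return fmt.Errorf("update not visible, got %q", got.Data["data"])
        }
        return nil
    }
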
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":5,"skipped":413,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:28.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-6a1d9f22-8aee-4357-9386-bb7d02939cfb in namespace container-probe-6840 Nov 13 04:28:44.711: INFO: Started pod liveness-override-6a1d9f22-8aee-4357-9386-bb7d02939cfb in namespace container-probe-6840 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:28:44.713: INFO: Initial restart count of pod liveness-override-6a1d9f22-8aee-4357-9386-bb7d02939cfb is 0 Nov 13 04:28:46.722: INFO: Restart count of pod container-probe-6840/liveness-override-6a1d9f22-8aee-4357-9386-bb7d02939cfb is now 1 (2.008899495s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:28:46.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6840" for this suite. • [SLOW TEST:18.068 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":3,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:44.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 13 04:28:44.609: INFO: Found ClusterRoles; assuming RBAC is enabled. 
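
The Probing container spec above ([Feature:ProbeTerminationGracePeriod], gated in this v1.21 suite) checks that a terminationGracePeriodSeconds set on the liveness probe itself, rather than at pod level, governs how fast a failing container is killed; hence the restart after roughly 2 seconds. A sketch of a container carrying such a probe, against the v1.21 API (where the probe's handler field is still named Handler; the command and numbers are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // livenessWithOverride returns a container whose liveness probe carries
    // its own termination grace period; when the probe fails, the kubelet
    // honours the probe-level 10s instead of the pod-level value.
    func livenessWithOverride() corev1.Container {
        probeGrace := int64(10)
        return corev1.Container{
            Name:    "liveness-override",
            Image:   "busybox",
            Command: []string{"sleep", "3600"},
            LivenessProbe: &corev1.Probe{
                Handler: corev1.Handler{
                    Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}, // always fails
                },
                InitialDelaySeconds:           1,
                PeriodSeconds:                 1,
                FailureThreshold:              1,
                TerminationGracePeriodSeconds: &probeGrace, // the override under test
            },
        }
    }
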
[It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Nov 13 04:28:44.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2589 create -f -' Nov 13 04:28:45.021: INFO: stderr: "" Nov 13 04:28:45.021: INFO: stdout: "secret/test-secret created\n" Nov 13 04:28:45.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2589 create -f -' Nov 13 04:28:45.348: INFO: stderr: "" Nov 13 04:28:45.348: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Nov 13 04:28:59.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2589 logs secret-test-pod test-container' Nov 13 04:28:59.523: INFO: stderr: "" Nov 13 04:28:59.523: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:28:59.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-2589" for this suite. • [SLOW TEST:14.952 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":6,"skipped":476,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:27:45.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W1113 04:27:45.949021 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Nov 13 04:27:45.950: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Nov 13 04:27:45.953: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-2c569bfb-f00b-4350-973d-81641070ff2b in namespace container-probe-4878 Nov 13 04:27:51.977: INFO: Started pod startup-2c569bfb-f00b-4350-973d-81641070ff2b in namespace container-probe-4878 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:27:51.980: INFO: Initial restart count of pod startup-2c569bfb-f00b-4350-973d-81641070ff2b is 0 Nov 13 04:29:02.131: INFO: Restart count of pod 
container-probe-4878/startup-2c569bfb-f00b-4350-973d-81641070ff2b is now 1 (1m10.151775442s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:02.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4878" for this suite. • [SLOW TEST:76.220 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":1,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:02.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:04.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2310" for this suite. 
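
The sysctl spec above requests a "greylisted" sysctl, one the kubelet knows about but treats as unsafe unless explicitly enabled via --allowed-unsafe-sysctls, and then watches for the pod to be rejected at kubelet admission rather than started. A sketch of such a pod spec (kernel.msgmax stands in for whichever greylisted sysctl the spec actually uses; this is an illustration, not the spec's fixture):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // greylistedSysctlPod asks for an unsafe sysctl via the pod security
    // context; unless the kubelet runs with
    // --allowed-unsafe-sysctls=kernel.msgmax, admission rejects the pod
    // instead of starting it, which is what the spec asserts.
    func greylistedSysctlPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "sysctl-greylist"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{
                    Sysctls: []corev1.Sysctl{{Name: "kernel.msgmax", Value: "10000000000"}},
                },
                Containers: []corev1.Container{{
                    Name:    "c",
                    Image:   "busybox",
                    Command: []string{"sleep", "3600"},
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
    }
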
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":2,"skipped":235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:28:18.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 in namespace kubelet-7204
I1113 04:28:18.267679 28 runners.go:190] Created replication controller with name: cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09, namespace: kubelet-7204, replica count: 20
I1113 04:28:28.319069 28 runners.go:190] cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 04:28:38.319326 28 runners.go:190] cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 Pods: 20 out of 20 created, 16 running, 4 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1113 04:28:48.321967 28 runners.go:190] cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Nov 13 04:28:49.322: INFO: Checking pods on node node2 via /runningpods endpoint
Nov 13 04:28:49.322: INFO: Checking pods on node node1 via /runningpods endpoint
Nov 13 04:28:49.344: INFO: Resource usage on node "master2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.384       3742.23                  1526.14
"runtime"   0.098       628.30                   262.41
"kubelet"   0.098       628.30                   262.41

Resource usage on node "master3":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.441       4081.77                  1695.77
"runtime"   0.094       539.12                   245.84
"kubelet"   0.094       539.12                   245.84

Resource usage on node "node1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         1.383       6743.02                  2563.16
"runtime"   0.747       2670.49                  625.66
"kubelet"   0.747       2670.49                  625.66

Resource usage on node "node2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         1.087       4262.57                  1220.53
"runtime"   0.855       1671.15                  604.85
"kubelet"   0.855       1671.15                  604.85

Resource usage on node "master1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.345       5086.40                  1733.73
"runtime"   0.114       674.80                   290.95
"kubelet"   0.114       674.80                   290.95

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 in namespace kubelet-7204, will wait for the garbage collector to delete the pods
Nov 13 04:28:49.401: INFO: Deleting ReplicationController cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 took: 3.961032ms
Nov 13 04:28:50.002: INFO: Terminating ReplicationController cleanup20-b3ad1a5f-5518-449d-8628-6a65cb71de09 pods took: 601.148115ms
Nov 13 04:29:05.804: INFO: Checking pods on node node2 via /runningpods endpoint
Nov 13 04:29:05.804: INFO: Checking pods on node node1 via /runningpods endpoint
Nov 13 04:29:05.820: INFO: Deleting 20 pods on 2 nodes completed in 1.017233603s after the RC was deleted
Nov 13 04:29:05.821: INFO: CPU usage of containers on node "master1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.342  0.345  0.389  0.470  0.470  0.470
"runtime"  0.000  0.000  0.114  0.128  0.128  0.128  0.128
"kubelet"  0.000  0.000  0.114  0.128  0.128  0.128  0.128

CPU usage of containers on node "master2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  0.321  0.321  0.336  0.336  0.336
"runtime"  0.000  0.000  0.098  0.098  0.098  0.098  0.098
"kubelet"  0.000  0.000  0.098  0.098  0.098  0.098  0.098

CPU usage of containers on node "master3":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.402  0.441  0.462  0.529  0.529  0.529
"runtime"  0.000  0.000  0.099  0.099  0.104  0.104  0.104
"kubelet"  0.000  0.000  0.099  0.099  0.104  0.104  0.104

CPU usage of containers on node "node1":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.568  1.568  1.704  1.704  1.704
"runtime"  0.000  0.000  0.438  0.747  0.747  0.747  0.747
"kubelet"  0.000  0.000  0.438  0.747  0.747  0.747  0.747

CPU usage of containers on node "node2":
container  5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"        0.000  0.000  1.227  1.227  1.920  1.920  1.920
"runtime"  0.000  0.000  0.822  0.855  0.855  0.855  0.855
"kubelet"  0.000  0.000  0.822  0.855  0.855  0.855  0.855

[AfterEach] Clean up pods on node
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:29:05.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-7204" for this suite.

• [SLOW TEST:47.653 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":2,"skipped":226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:27:54.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-35c512ec-2150-439a-9296-0e19ac593044 in namespace container-probe-7555 Nov 13 04:28:06.448: INFO: Started pod busybox-35c512ec-2150-439a-9296-0e19ac593044 in namespace container-probe-7555 Nov 13 04:28:06.448: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (1.339µs elapsed) Nov 13 04:28:08.448: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (2.000042072s elapsed) Nov 13 04:28:10.448: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (4.000604933s elapsed) Nov 13 04:28:12.451: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (6.003224168s elapsed) Nov 13 04:28:14.452: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (8.004768776s elapsed) Nov 13 04:28:16.454: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (10.005962444s elapsed) Nov 13 04:28:18.455: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (12.007169286s elapsed) Nov 13 04:28:20.455: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (14.007447798s elapsed) Nov 13 04:28:22.456: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (16.008794775s elapsed) Nov 13 04:28:24.457: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (18.009168073s elapsed) Nov 13 04:28:26.458: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (20.010798487s elapsed) Nov 13 04:28:28.460: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (22.011930068s elapsed) Nov 13 04:28:30.460: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (24.012440453s elapsed) Nov 13 04:28:32.462: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (26.014595873s elapsed) Nov 13 04:28:34.463: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (28.015167845s elapsed) Nov 13 04:28:36.464: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (30.01615676s elapsed) Nov 13 04:28:38.467: INFO: pod 
container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (32.019520813s elapsed) Nov 13 04:28:40.468: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (34.020437333s elapsed) Nov 13 04:28:42.471: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (36.023100336s elapsed) Nov 13 04:28:44.472: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (38.023903532s elapsed) Nov 13 04:28:46.474: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (40.026333559s elapsed) Nov 13 04:28:48.476: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (42.028841861s elapsed) Nov 13 04:28:50.477: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (44.029732068s elapsed) Nov 13 04:28:52.480: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (46.032145207s elapsed) Nov 13 04:28:54.481: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (48.032944946s elapsed) Nov 13 04:28:56.481: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (50.033203978s elapsed) Nov 13 04:28:58.482: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (52.034800161s elapsed) Nov 13 04:29:00.484: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (54.03585684s elapsed) Nov 13 04:29:02.485: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (56.037806564s elapsed) Nov 13 04:29:04.486: INFO: pod container-probe-7555/busybox-35c512ec-2150-439a-9296-0e19ac593044 is not ready (58.037979062s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:06.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7555" for this suite. • [SLOW TEST:72.093 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":2,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:27:58.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Nov 13 04:27:58.295: INFO: Found ClusterRoles; assuming RBAC is enabled. 
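
The long run of "is not ready" lines above is the expected outcome of that spec: the pod's exec readiness probe deliberately outruns its own timeoutSeconds, and kubelets >= 1.20 actually enforce exec-probe timeouts (hence the [MinimumKubeletVersion:1.20] tag), so readiness never flips to true during the ~58 s observation window. A sketch of a probe shaped like that (the command and numbers are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // execTimeoutReadiness returns a readiness probe whose command sleeps
    // longer than the probe timeout; on kubelet >= 1.20 every attempt is
    // cut off at TimeoutSeconds, so the container never becomes ready.
    func execTimeoutReadiness() *corev1.Probe {
        return &corev1.Probe{
            Handler: corev1.Handler{
                Exec: &corev1.ExecAction{
                    Command: []string{"/bin/sh", "-c", "sleep 600"}, // outruns the 1s timeout below
                },
            },
            TimeoutSeconds: 1,
            PeriodSeconds:  2,
        }
    }
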
[It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Nov 13 04:27:58.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7815 create -f -' Nov 13 04:27:58.859: INFO: stderr: "" Nov 13 04:27:58.859: INFO: stdout: "pod/liveness-exec created\n" Nov 13 04:27:58.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7815 create -f -' Nov 13 04:27:59.176: INFO: stderr: "" Nov 13 04:27:59.176: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Nov 13 04:28:03.185: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:05.188: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:07.190: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:09.184: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:09.195: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:11.188: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:11.198: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:13.193: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:13.200: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:15.197: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:15.204: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:17.200: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:17.206: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:19.205: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:19.210: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:21.208: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:21.213: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:23.213: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:23.216: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:25.216: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:25.218: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:27.221: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:27.221: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:29.224: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:29.224: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:31.229: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:31.229: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:33.233: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:33.233: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:35.236: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:35.236: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:37.240: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:37.240: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:39.244: INFO: Pod: liveness-http, restart count:0 Nov 13 04:28:39.244: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:41.247: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:41.247: INFO: Pod: liveness-http, restart count:1 Nov 13 04:28:41.247: INFO: Saw liveness-http restart, succeeded... 
Nov 13 04:28:43.252: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:45.256: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:47.259: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:49.263: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:51.266: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:53.271: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:55.275: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:57.279: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:28:59.282: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:01.284: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:03.289: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:05.293: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:07.297: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:09.300: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:11.304: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:13.308: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:15.313: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:17.318: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:19.323: INFO: Pod: liveness-exec, restart count:0 Nov 13 04:29:21.326: INFO: Pod: liveness-exec, restart count:1 Nov 13 04:29:21.326: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:21.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7815" for this suite. • [SLOW TEST:83.066 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":3,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:21.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:21.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-4090" for this suite. 
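
The RuntimeClass spec above needs no pod to actually run: it submits a pod whose own nodeSelector contradicts the nodeSelector carried by the RuntimeClass it requests, and expects the create to be rejected, because the RuntimeClass admission controller merges the class's selector into the pod's and cannot give one key two values. A sketch of such a conflicting pair (the handler name and labels are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        nodev1 "k8s.io/api/node/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // conflictingPair returns a RuntimeClass that schedules only onto
    // foo=bar nodes, and a pod that requests it while pinning foo to a
    // different value -- the combination the spec expects to be rejected.
    func conflictingPair() (*nodev1.RuntimeClass, *corev1.Pod) {
        rcName := "test-runtimeclass"
        rc := &nodev1.RuntimeClass{
            ObjectMeta: metav1.ObjectMeta{Name: rcName},
            Handler:    "runc",
            Scheduling: &nodev1.Scheduling{NodeSelector: map[string]string{"foo": "bar"}},
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "conflict-pod"},
            Spec: corev1.PodSpec{
                RuntimeClassName: &rcName,
                NodeSelector:     map[string]string{"foo": "conflict"}, // clashes with the class's selector
                Containers:       []corev1.Container{{Name: "c", Image: "busybox"}},
            },
        }
        return rc, pod
    }
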
• ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":4,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:06.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:22.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8617" for this suite. • [SLOW TEST:16.079 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":3,"skipped":110,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:22.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Nov 13 04:29:22.793: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-8047" to be "Succeeded or Failed" Nov 13 04:29:22.795: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207287ms Nov 13 04:29:24.800: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006570705s Nov 13 04:29:26.805: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011442839s Nov 13 04:29:26.805: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:26.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8047" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":4,"skipped":130,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:21.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Nov 13 04:29:21.814: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:29:23.818: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:29:25.817: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:29:27.819: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Nov 13 04:29:27.822: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5182 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:29:27.822: INFO: >>> kubeConfig: /root/.kube/config Nov 13 04:29:28.316: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-5182 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:29:28.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Nov 13 04:29:28.475: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5182 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Nov 13 04:29:28.475: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:28.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-5182" for this suite. 
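
The privileged-pod spec above runs the same "ip link add dummy1 type dummy" in two containers of one pod and expects it to succeed only in the privileged one, since creating a network interface needs CAP_NET_ADMIN, which the non-privileged container lacks. A sketch of the two-container pod, using the container names from the log (image and command are illustrative):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // privilegedPairPod runs one privileged and one unprivileged container;
    // "ip link add" is expected to work only in the former.
    func privilegedPairPod() *corev1.Pod {
        priv, unpriv := true, false
        container := func(name string, p *bool) corev1.Container {
            return corev1.Container{
                Name:            name,
                Image:           "busybox",
                Command:         []string{"sleep", "3600"},
                SecurityContext: &corev1.SecurityContext{Privileged: p},
            }
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    container("privileged-container", &priv),
                    container("not-privileged-container", &unpriv),
                },
            },
        }
    }
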
• [SLOW TEST:6.913 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":5,"skipped":264,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:29:28.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400
E1113 04:29:30.881399 38 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 214 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x653b640, 0x9beb6a0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc0014a0f0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0042409c0, 0xc0014a0f00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc001b2cfc0, 0xc0042409c0, 0xc000d31b60, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc001b2cfc0, 0xc0042409c0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc001b2cfc0, 0xc0042409c0, 0xc001b2cfc0, 0xc0042409c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0042409c0, 0x14, 0xc003b0d8c0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc0022b6f20, 0xc000055b78, 0x14, 0xc003b0d8c0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0012564e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0012564e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001016980, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002cd0ff0, 0x0, 0x768f9a0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002cd0ff0, 0x768f9a0, 0xc000190840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0022f6000, 0xc002cd0ff0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0022f6000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0022f6000, 0xc0022ec030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000194280, 0x7f5793cf4ae0, 0xc00078b080, 0x6f05d9d, 0x14, 0xc0042ef6b0, 0x3, 0x3, 0x7745ab8, 0xc000190840, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc00078b080, 0x6f05d9d, 0x14, 0xc00332fa40, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc00078b080, 0x6f05d9d, 0x14, 0xc002aae3c0, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00078b080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00078b080)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00078b080, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-8568".
STEP: Found 2 events.
Nov 13 04:29:30.884: INFO: At 2021-11-13 04:29:28 +0000 UTC - event for startup-d0548019-7372-4a57-80c2-dc43f86948c0: {default-scheduler } Scheduled: Successfully assigned container-probe-8568/startup-d0548019-7372-4a57-80c2-dc43f86948c0 to node2
Nov 13 04:29:30.884: INFO: At 2021-11-13 04:29:30 +0000 UTC - event for startup-d0548019-7372-4a57-80c2-dc43f86948c0: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Nov 13 04:29:30.887: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 13 04:29:30.887: INFO: startup-d0548019-7372-4a57-80c2-dc43f86948c0 node2 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 04:29:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 04:29:28 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-11-13 04:29:28 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-11-13 04:29:28 +0000 UTC }]
Nov 13 04:29:30.887: INFO: 
Nov 13 04:29:30.891: INFO: Logging node info for node master1
Nov 13 04:29:30.894: INFO: Node Info: &Node{ObjectMeta:{master1 56d66c54-e52b-494a-a758-e4b658c4b245 160868 0 2021-11-12 21:05:50 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:05:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}}
{flanneld Update v1 2021-11-12 21:08:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:13:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:25 +0000 UTC,LastTransitionTime:2021-11-12 21:11:25 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:23 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:23 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:23 +0000 UTC,LastTransitionTime:2021-11-12 21:05:48 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 04:29:23 +0000 UTC,LastTransitionTime:2021-11-12 21:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94e600d00e79450a9fb632d8473a11eb,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:6e4bb815-8b93-47c2-9321-93e7ada261f6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 tasextender:latest 
localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:57d1a39684ee5a5b5d34638cff843561d440d0f996303b2e841cabf228a4c2af nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 04:29:30.894: INFO: Logging kubelet events for node master1 Nov 13 04:29:30.896: INFO: Logging pods the kubelet thinks is on node master1 Nov 13 04:29:30.927: INFO: kube-controller-manager-master1 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 04:29:30.927: INFO: kube-flannel-79bvx started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Init container install-cni ready: true, restart count 0 Nov 13 04:29:30.927: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 04:29:30.927: INFO: kube-multus-ds-amd64-qtmwl started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:29:30.927: INFO: coredns-8474476ff8-9vc8b started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Container coredns ready: true, restart count 2 Nov 13 04:29:30.927: INFO: node-exporter-zm5hq started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:30.927: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:29:30.927: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:29:30.927: INFO: kube-scheduler-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Container kube-scheduler ready: true, restart count 0 Nov 13 04:29:30.927: INFO: kube-apiserver-master1 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 04:29:30.927: INFO: kube-proxy-6m7qt started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:30.927: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 04:29:30.927: INFO: container-registry-65d7c44b96-qwqcz started at 2021-11-12 21:12:56 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:30.927: INFO: Container docker-registry ready: true, restart count 0 Nov 13 04:29:30.927: INFO: Container nginx ready: true, restart count 0 W1113 04:29:30.940961 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
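For context on the panic earlier in this dump: the trace bottoms out in podContainerStarted.func1 (test/e2e/framework/pod/resource.go:334), the poll condition behind WaitForPodContainerStarted. In the v1.21 test framework that condition dereferences ContainerStatus.Started, a *bool the kubelet leaves nil until it has reported container startup at least once; the pod table above shows the pod still Pending with its image being pulled, which is exactly the window where the field is nil. A nil-safe version of the check would look like the following sketch (function name and shape are illustrative assumptions, not the framework's exact code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// containerStarted reports whether the container at containerIndex has
// started, treating a missing status or a nil Started pointer as "not yet".
func containerStarted(pod *corev1.Pod, containerIndex int) bool {
	if containerIndex < 0 || containerIndex >= len(pod.Status.ContainerStatuses) {
		return false
	}
	status := pod.Status.ContainerStatuses[containerIndex]
	// Started is a *bool: nil means the kubelet has not reported startup
	// yet (e.g. the image is still being pulled). Dereferencing it
	// unconditionally is the failure mode consistent with the trace above.
	return status.Started != nil && *status.Started
}

func main() {
	pod := &corev1.Pod{} // a Pending pod with no container statuses yet
	fmt.Println(containerStarted(pod, 0)) // prints false instead of panicking
}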
Nov 13 04:29:31.007: INFO: Latency metrics for node master1 Nov 13 04:29:31.007: INFO: Logging node info for node master2 Nov 13 04:29:31.009: INFO: Node Info: &Node{ObjectMeta:{master2 9cc6c106-2749-4b3a-bbe2-d8a672ab49e0 160804 0 2021-11-12 21:06:20 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-11-12 21:16:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-11-12 21:16:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 
0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:30 +0000 UTC,LastTransitionTime:2021-11-12 21:11:30 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:06:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:08:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:65d51a0e6dc44ad1ac5d3b5cd37365f1,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:728abaee-0c5e-4ddb-a22e-72a1345c5ab6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 04:29:31.010: INFO: Logging kubelet events for node master2 Nov 13 04:29:31.012: INFO: Logging pods the kubelet thinks is on node master2 Nov 13 04:29:31.025: INFO: node-exporter-clpwc started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.025: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.025: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:29:31.025: INFO: kube-controller-manager-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Container kube-controller-manager ready: true, restart count 2 Nov 13 04:29:31.025: INFO: kube-scheduler-master2 started at 2021-11-12 21:15:21 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 04:29:31.025: INFO: kube-proxy-5xbt9 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 04:29:31.025: INFO: kube-flannel-x76f4 started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Init container install-cni ready: true, restart count 0 Nov 13 04:29:31.025: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 04:29:31.025: INFO: kube-multus-ds-amd64-8zzgk started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:29:31.025: INFO: coredns-8474476ff8-s7twh started at 2021-11-12 21:09:11 +0000 UTC (0+1 container 
statuses recorded) Nov 13 04:29:31.025: INFO: Container coredns ready: true, restart count 1 Nov 13 04:29:31.025: INFO: kube-apiserver-master2 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 04:29:31.025: INFO: node-feature-discovery-controller-cff799f9f-c54h8 started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.025: INFO: Container nfd-controller ready: true, restart count 0 W1113 04:29:31.038221 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 04:29:31.106: INFO: Latency metrics for node master2 Nov 13 04:29:31.106: INFO: Logging node info for node master3 Nov 13 04:29:31.110: INFO: Node Info: &Node{ObjectMeta:{master3 fce0cd54-e4d8-4ce1-b720-522aad2d7989 160805 0 2021-11-12 21:06:31 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-11-12 21:06:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-11-12 21:08:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-11-12 21:19:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:06:31 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 04:29:21 +0000 UTC,LastTransitionTime:2021-11-12 21:11:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:592c271b4697499588d9c2b3767b866a,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:a95de4ca-c566-4b34-8463-623af932d416,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 
nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 04:29:31.110: INFO: Logging kubelet events for node master3 Nov 13 04:29:31.112: INFO: Logging pods the kubelet thinks is on node master3 Nov 13 04:29:31.122: INFO: kube-multus-ds-amd64-vp8p7 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.122: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:29:31.122: INFO: dns-autoscaler-7df78bfcfb-d88qs started at 2021-11-12 21:09:13 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.122: INFO: Container autoscaler ready: true, restart count 1 Nov 13 04:29:31.122: 
INFO: kube-proxy-tssd5 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.122: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 04:29:31.122: INFO: kube-flannel-vxlrs started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 04:29:31.122: INFO: Init container install-cni ready: true, restart count 0 Nov 13 04:29:31.122: INFO: Container kube-flannel ready: true, restart count 1 Nov 13 04:29:31.122: INFO: kube-scheduler-master3 started at 2021-11-12 21:06:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.123: INFO: Container kube-scheduler ready: true, restart count 2 Nov 13 04:29:31.123: INFO: node-exporter-l4x25 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.123: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.123: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:29:31.123: INFO: kube-apiserver-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.123: INFO: Container kube-apiserver ready: true, restart count 0 Nov 13 04:29:31.123: INFO: kube-controller-manager-master3 started at 2021-11-12 21:11:20 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.123: INFO: Container kube-controller-manager ready: true, restart count 3 W1113 04:29:31.137166 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 04:29:31.212: INFO: Latency metrics for node master3 Nov 13 04:29:31.212: INFO: Logging node info for node node1 Nov 13 04:29:31.217: INFO: Node Info: &Node{ObjectMeta:{node1 6ceb907c-9809-4d18-88c6-b1e10ba80f97 160971 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 
feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:21 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 04:28:18 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:27 +0000 UTC,LastTransitionTime:2021-11-12 21:11:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:29 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:29 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:29 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 04:29:29 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cf6287777fe4e3b9a80df40dea25b6d,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:2125bc5f-9167-464a-b6d0-8e8a192327d3,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003432896,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:1841df8d4cc71e4f69cc1603012b99570f40d18cd36ee1065933b46f984cf0cd alpine:3.12],SizeBytes:5592390,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 04:29:31.218: INFO: Logging kubelet events for node node1 Nov 13 04:29:31.220: INFO: Logging pods the kubelet thinks is on node node1 Nov 13 04:29:31.236: INFO: prometheus-operator-585ccfb458-qcz7s started at 2021-11-12 21:21:55 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.236: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container prometheus-operator ready: true, restart count 0 Nov 13 04:29:31.236: INFO: liveness-http started at 2021-11-13 04:27:59 +0000 UTC (0+1 container 
statuses recorded) Nov 13 04:29:31.236: INFO: Container liveness-http ready: false, restart count 2 Nov 13 04:29:31.236: INFO: collectd-74xkn started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 04:29:31.236: INFO: Container collectd ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.236: INFO: pod-submit-status-1-6 started at 2021-11-13 04:29:21 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container busybox ready: false, restart count 0 Nov 13 04:29:31.236: INFO: busybox-109a9a53-f559-47c5-a5e5-e7724143132e started at 2021-11-13 04:29:05 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container busybox ready: true, restart count 0 Nov 13 04:29:31.236: INFO: kube-proxy-p6kbl started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container kube-proxy ready: true, restart count 2 Nov 13 04:29:31.236: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-m62v8 started at 2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:29:31.236: INFO: liveness-691b5db7-aaad-45a1-a595-89be6ddc80d9 started at 2021-11-13 04:27:45 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 04:29:31.236: INFO: cmk-init-discover-node1-vkj2s started at 2021-11-12 21:20:18 +0000 UTC (0+3 container statuses recorded) Nov 13 04:29:31.236: INFO: Container discover ready: false, restart count 0 Nov 13 04:29:31.236: INFO: Container init ready: false, restart count 0 Nov 13 04:29:31.236: INFO: Container install ready: false, restart count 0 Nov 13 04:29:31.236: INFO: node-feature-discovery-worker-zgr4c started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:29:31.236: INFO: cmk-webhook-6c9d5f8578-2gp25 started at 2021-11-12 21:21:01 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container cmk-webhook ready: true, restart count 0 Nov 13 04:29:31.236: INFO: pod-submit-status-2-6 started at 2021-11-13 04:29:21 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container busybox ready: false, restart count 0 Nov 13 04:29:31.236: INFO: prometheus-k8s-0 started at 2021-11-12 21:22:14 +0000 UTC (0+4 container statuses recorded) Nov 13 04:29:31.236: INFO: Container config-reloader ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container custom-metrics-apiserver ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container grafana ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container prometheus ready: true, restart count 1 Nov 13 04:29:31.236: INFO: startup-ab4afc2d-6d0d-4c94-ae84-c08f58fb6816 started at 2021-11-13 04:28:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container busybox ready: true, restart count 0 Nov 13 04:29:31.236: INFO: cmk-4tcdw started at 2021-11-12 21:21:00 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.236: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:29:31.236: INFO: node-exporter-hqkfs started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses 
recorded) Nov 13 04:29:31.236: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.236: INFO: Container node-exporter ready: true, restart count 0 Nov 13 04:29:31.236: INFO: nginx-proxy-node1 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:29:31.236: INFO: kube-flannel-r7bbp started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Init container install-cni ready: true, restart count 2 Nov 13 04:29:31.236: INFO: Container kube-flannel ready: true, restart count 3 Nov 13 04:29:31.236: INFO: kube-multus-ds-amd64-4wqsv started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.236: INFO: Container kube-multus ready: true, restart count 1 W1113 04:29:31.249235 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 04:29:31.496: INFO: Latency metrics for node node1 Nov 13 04:29:31.496: INFO: Logging node info for node node2 Nov 13 04:29:31.498: INFO: Node Info: &Node{ObjectMeta:{node2 652722dd-12b1-4529-ba4d-a00c590e4a68 160931 0 2021-11-12 21:07:36 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 
kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-11-12 21:07:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-11-12 21:08:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-11-12 21:16:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-11-12 21:20:44 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-11-13 01:56:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{},"f:scheduling.k8s.io/foo":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-11-13 04:28:18 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:scheduling.k8s.io/foo":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},scheduling.k8s.io/foo: {{3 0} {} 3 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-11-12 21:11:26 +0000 UTC,LastTransitionTime:2021-11-12 21:11:26 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:26 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:26 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-11-13 04:29:26 +0000 UTC,LastTransitionTime:2021-11-12 21:07:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-11-13 04:29:26 +0000 UTC,LastTransitionTime:2021-11-12 21:08:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fec67f7547064c508c27d44a9bf99ae7,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0a05ac00-ff21-4518-bf68-3564c7a8cf65,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.10,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[localhost:30500/cmk@sha256:9c8712e686132463f8f4d9787e57cff8c3c47bb77fca5fc1d96f6763d2717f29 localhost:30500/cmk:v1.5.1],SizeBytes:724566215,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:760040499bb9aa55e93c6074d538c7fb6c32ef9fc567e4edcc0ef55197276560 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42686989,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:93c03fd0e56363624f0df9368752e6e7c270d969da058ae5066fe8d668541f51 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 13 04:29:31.499: INFO: Logging kubelet events for node node2 Nov 13 04:29:31.501: INFO: Logging pods the kubelet thinks is on node node2 Nov 13 04:29:31.517: INFO: pod-back-off-image started at 2021-11-13 04:27:54 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container back-off ready: false, restart count 3 Nov 13 04:29:31.517: INFO: liveness-exec started at 2021-11-13 04:27:58 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container liveness-exec ready: true, restart count 1 Nov 13 04:29:31.517: INFO: nginx-proxy-node2 started at 2021-11-12 21:07:36 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container nginx-proxy ready: true, restart count 2 Nov 13 04:29:31.517: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-7brrh started at 
2021-11-12 21:17:59 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container kube-sriovdp ready: true, restart count 0 Nov 13 04:29:31.517: INFO: kube-multus-ds-amd64-2wqj5 started at 2021-11-12 21:08:45 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container kube-multus ready: true, restart count 1 Nov 13 04:29:31.517: INFO: kubernetes-dashboard-785dcbb76d-w2mls started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container kubernetes-dashboard ready: true, restart count 1 Nov 13 04:29:31.517: INFO: kube-proxy-pzhf2 started at 2021-11-12 21:07:39 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container kube-proxy ready: true, restart count 1 Nov 13 04:29:31.517: INFO: kube-flannel-mg66r started at 2021-11-12 21:08:36 +0000 UTC (1+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Init container install-cni ready: true, restart count 2 Nov 13 04:29:31.517: INFO: Container kube-flannel ready: true, restart count 2 Nov 13 04:29:31.517: INFO: back-off-cap started at 2021-11-13 04:29:04 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container back-off-cap ready: false, restart count 1 Nov 13 04:29:31.517: INFO: privileged-pod started at 2021-11-13 04:29:21 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.517: INFO: Container not-privileged-container ready: true, restart count 0 Nov 13 04:29:31.517: INFO: Container privileged-container ready: true, restart count 0 Nov 13 04:29:31.517: INFO: pod-ready started at 2021-11-13 04:29:06 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container pod-readiness-gate ready: true, restart count 0 Nov 13 04:29:31.517: INFO: node-feature-discovery-worker-mm7xs started at 2021-11-12 21:16:36 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container nfd-worker ready: true, restart count 0 Nov 13 04:29:31.517: INFO: collectd-mp2z6 started at 2021-11-12 21:25:58 +0000 UTC (0+3 container statuses recorded) Nov 13 04:29:31.517: INFO: Container collectd ready: true, restart count 0 Nov 13 04:29:31.517: INFO: Container collectd-exporter ready: true, restart count 0 Nov 13 04:29:31.517: INFO: Container rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.517: INFO: implicit-nonroot-uid started at 2021-11-13 04:29:22 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container implicit-nonroot-uid ready: false, restart count 0 Nov 13 04:29:31.517: INFO: startup-3a526bd7-82a1-4b14-aa49-8a32e9eb49f3 started at 2021-11-13 04:28:10 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container busybox ready: false, restart count 0 Nov 13 04:29:31.517: INFO: cmk-qhvr7 started at 2021-11-12 21:21:01 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.517: INFO: Container nodereport ready: true, restart count 0 Nov 13 04:29:31.517: INFO: Container reconcile ready: true, restart count 0 Nov 13 04:29:31.517: INFO: pod-submit-remove-619e9f5e-42e8-4c16-b6f2-418039e2210a started at 2021-11-13 04:29:27 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container agnhost-container ready: true, restart count 0 Nov 13 04:29:31.517: INFO: cmk-init-discover-node2-5f4hp started at 2021-11-12 21:20:38 +0000 UTC (0+3 container statuses recorded) Nov 13 04:29:31.517: INFO: Container discover ready: false, restart count 0 Nov 13 04:29:31.517: INFO: Container init ready: false, restart count 0 Nov 13 04:29:31.517: 
INFO: Container install ready: false, restart count 0 Nov 13 04:29:31.517: INFO: startup-d0548019-7372-4a57-80c2-dc43f86948c0 started at 2021-11-13 04:29:28 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container busybox ready: false, restart count 0 Nov 13 04:29:31.517: INFO: tas-telemetry-aware-scheduling-84ff454dfb-q7m54 started at 2021-11-12 21:25:09 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container tas-extender ready: true, restart count 0 Nov 13 04:29:31.517: INFO: kubernetes-metrics-scraper-5558854cb-jmbpk started at 2021-11-12 21:09:15 +0000 UTC (0+1 container statuses recorded) Nov 13 04:29:31.517: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 Nov 13 04:29:31.517: INFO: node-exporter-hstd9 started at 2021-11-12 21:22:03 +0000 UTC (0+2 container statuses recorded) Nov 13 04:29:31.517: INFO: Container kube-rbac-proxy ready: true, restart count 0 Nov 13 04:29:31.517: INFO: Container node-exporter ready: true, restart count 0 W1113 04:29:31.532113 38 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Nov 13 04:29:31.706: INFO: Latency metrics for node node2 Nov 13 04:29:31.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8568" for this suite. •! Panic [2.876 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc0014a0f0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0042409c0, 0xc0014a0f00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc001b2cfc0, 0xc0042409c0, 0xc000d31b60, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc001b2cfc0, 0xc0042409c0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc001b2cfc0, 0xc0042409c0, 0xc001b2cfc0, 0xc0042409c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0042409c0, 0x14, 0xc003b0d8c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc0022b6f20, 0xc000055b78, 0x14, 0xc003b0d8c0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00078b080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00078b080) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc00078b080, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:31.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Nov 13 04:29:31.861: INFO: Waiting up to 5m0s for pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774" in namespace "security-context-test-3267" to be "Succeeded or Failed" Nov 13 04:29:31.864: INFO: Pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774": Phase="Pending", Reason="", readiness=false. Elapsed: 3.088401ms Nov 13 04:29:33.867: INFO: Pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005619445s Nov 13 04:29:35.870: INFO: Pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008924845s Nov 13 04:29:37.874: INFO: Pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774": Phase="Pending", Reason="", readiness=false. Elapsed: 6.012945458s Nov 13 04:29:39.878: INFO: Pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.017386963s Nov 13 04:29:39.878: INFO: Pod "busybox-user-0-bd6b740e-4d4e-4e96-bb56-985db8a16774" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:39.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3267" for this suite. 
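The panic above is a nil-pointer dereference inside the pod-started poll: ContainerStatus.Started is a *bool that the kubelet may not have reported yet, and the condition at resource.go:334 evidently trips over exactly such a nil field. A nil-safe sketch of that predicate, using the real v1 types from k8s.io/api/core/v1 (the helper name is illustrative, not the framework's):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// containerStarted reports whether container i in the pod's status has
// started. Started is a *bool: it stays nil until the kubelet first reports
// it, so it must be nil-checked before dereferencing -- skipping that check
// is the kind of nil-pointer dereference the stack trace above shows.
func containerStarted(pod *v1.Pod, i int) bool {
	statuses := pod.Status.ContainerStatuses
	if i < 0 || i >= len(statuses) {
		return false // no status reported yet
	}
	started := statuses[i].Started
	return started != nil && *started
}

func main() {
	pod := &v1.Pod{}                      // fresh pod, no statuses yet
	fmt.Println(containerStarted(pod, 0)) // false, no panic
}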
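The runAsUser spec just above builds a busybox pod whose container runs with an explicit uid and waits for it to reach "Succeeded or Failed". A minimal sketch of such a pod with client-go types; the image and command are assumptions taken from the test's naming, not its exact values:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxUserPod returns a run-once pod whose single container is forced to
// run as the given uid; with uid 0 the container runs as root.
func busyboxUserPod(uid int64) *v1.Pod {
	name := fmt.Sprintf("busybox-user-%d", uid)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    name,
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "id -u"}, // prints the effective uid
				SecurityContext: &v1.SecurityContext{
					RunAsUser: &uid,
				},
			}},
		},
	}
}

func main() { fmt.Println(busyboxUserPod(0).Name) }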
• [SLOW TEST:8.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:40.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Nov 13 04:29:40.326: INFO: Waiting up to 5m0s for pod "downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5" in namespace "downward-api-2198" to be "Succeeded or Failed" Nov 13 04:29:40.328: INFO: Pod "downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222973ms Nov 13 04:29:42.331: INFO: Pod "downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005334016s STEP: Saw pod success Nov 13 04:29:42.331: INFO: Pod "downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5" satisfied condition "Succeeded or Failed" Nov 13 04:29:42.333: INFO: Trying to get logs from node node2 pod downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5 container dapi-container: STEP: delete the pod Nov 13 04:29:42.345: INFO: Waiting for pod downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5 to disappear Nov 13 04:29:42.347: INFO: Pod downward-api-1f1ade0e-8035-4987-9d08-dad71e0206e5 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:42.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2198" for this suite. 
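The Downward API spec above injects status.hostIP and status.podIP into a host-network pod's environment, where the two values are expected to coincide. A sketch of that env-var wiring with client-go types (pod name, image, and command are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds a host-network pod whose container sees the host IP
// and pod IP as env vars resolved by the downward API at container start.
func downwardAPIPod() *v1.Pod {
	fieldEnv := func(name, path string) v1.EnvVar {
		return v1.EnvVar{
			Name: name,
			ValueFrom: &v1.EnvVarSource{
				FieldRef: &v1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: v1.PodSpec{
			HostNetwork:   true, // host network, so HOST_IP == POD_IP
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.28",
				Command: []string{"sh", "-c", "env | grep _IP"},
				Env: []v1.EnvVar{
					fieldEnv("HOST_IP", "status.hostIP"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
}

func main() { fmt.Println(downwardAPIPod().Name) }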
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":7,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:42.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Nov 13 04:29:43.005: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:43.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-14" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ S ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:43.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 Nov 13 04:29:43.051: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:43.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-6776" for this suite. 
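The AppArmor and SSH skips above are BeforeEach environment gates: each spec bails out unless the node OS distro or the cloud provider is on its supported list, which is why this debian/local cluster reports SKIPPING rather than a failure. A rough stand-alone sketch of the gating pattern (this mimics, rather than calls, the framework's skipper):

package main

import "fmt"

// skipUnless mimics the suite's BeforeEach gating: when the running
// environment is not in the supported set, the spec is skipped, not failed.
func skipUnless(supported []string, actual, what string) error {
	for _, s := range supported {
		if s == actual {
			return nil // environment matches; let the spec run
		}
	}
	return fmt.Errorf("Only supported for %s %v (not %s)", what, supported, actual)
}

func main() {
	// Reproduces the two skip messages above for this debian/local cluster.
	fmt.Println(skipUnless([]string{"gci", "ubuntu"}, "debian", "node OS distro"))
	fmt.Println(skipUnless([]string{"gce", "gke"}, "local", "providers"))
}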
S [SKIPPING] in Spec Setup (BeforeEach) [0.040 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:27.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Nov 13 04:29:36.355: INFO: start=2021-11-13 04:29:31.32603194 +0000 UTC m=+107.475362765, now=2021-11-13 04:29:36.355695119 +0000 UTC m=+112.505025970, kubelet pod: {"metadata":{"name":"pod-submit-remove-619e9f5e-42e8-4c16-b6f2-418039e2210a","namespace":"pods-826","uid":"2e18dc16-c593-42f3-a6c7-cb8aaded94a3","resourceVersion":"160943","creationTimestamp":"2021-11-13T04:29:27Z","deletionTimestamp":"2021-11-13T04:30:01Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"298663135"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.69\"\n ],\n \"mac\": \"c2:3c:30:57:32:2d\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.69\"\n ],\n \"mac\": \"c2:3c:30:57:32:2d\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-11-13T04:29:27.468908071Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-13T04:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-x66qz","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-x66qz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:27Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:34Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:34Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:27Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.69","podIPs":[{"ip":"10.244.4.69"}],"startTime":"2021-11-13T04:29:27Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-11-13T04:29:30Z","finishedAt":"2021-11-13T04:29:33Z","containerID":"docker://e3c5335f230d87039382f2e348f37b95d20cc48157279f14c9261c1a0261fc40"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://e3c5335f230d87039382f2e348f37b95d20cc48157279f14c9261c1a0261fc40","started":false}],"qosClass":"BestEffort"}} Nov 13 04:29:41.341: INFO: start=2021-11-13 04:29:31.32603194 +0000 UTC m=+107.475362765, now=2021-11-13 04:29:41.341269228 +0000 UTC m=+117.490600209, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-619e9f5e-42e8-4c16-b6f2-418039e2210a","namespace":"pods-826","uid":"2e18dc16-c593-42f3-a6c7-cb8aaded94a3","resourceVersion":"160943","creationTimestamp":"2021-11-13T04:29:27Z","deletionTimestamp":"2021-11-13T04:30:01Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"298663135"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.69\"\n ],\n \"mac\": \"c2:3c:30:57:32:2d\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.69\"\n ],\n \"mac\": \"c2:3c:30:57:32:2d\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-11-13T04:29:27.468908071Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-11-13T04:29:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-x66qz","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-x66qz","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:27Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:34Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:34Z","reason":"ContainersNotReady","message":"containers with unready status: 
[agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-11-13T04:29:27Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.69","podIPs":[{"ip":"10.244.4.69"}],"startTime":"2021-11-13T04:29:27Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2021-11-13T04:29:30Z","finishedAt":"2021-11-13T04:29:33Z","containerID":"docker://e3c5335f230d87039382f2e348f37b95d20cc48157279f14c9261c1a0261fc40"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://e3c5335f230d87039382f2e348f37b95d20cc48157279f14c9261c1a0261fc40","started":false}],"qosClass":"BestEffort"}} Nov 13 04:29:46.343: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:46.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-826" for this suite. • [SLOW TEST:19.080 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":5,"skipped":369,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:43.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 13 04:29:47.304: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:47.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6219" for this suite. 
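Stepping back to the Delete Grace Period spec above: the kubelet dumps show deletionGracePeriodSeconds:30 stamped on the pod after a graceful delete, and the spec then polls until the kubelet stops reporting the pod at all. A sketch of issuing such a delete with client-go (namespace and pod name are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteWithGrace issues a graceful delete: the API server stamps
// deletionTimestamp and deletionGracePeriodSeconds (the 30s visible above)
// and the pod lingers until the kubelet confirms termination.
func deleteWithGrace(ctx context.Context, cs kubernetes.Interface, ns, name string, seconds int64) error {
	return cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		GracePeriodSeconds: &seconds,
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(deleteWithGrace(context.Background(), cs, "pods-826", "pod-submit-remove-demo", 30))
}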
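The termination-message spec just above expects the container's terminated state to carry DONE, read back from TerminationMessagePath. A sketch of a container that produces exactly that, with client-go types (names are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationMessagePod writes DONE to the termination-log path; with the
// ReadFile policy the kubelet copies that file into the container's
// terminated-state message, which is what the assertion above checks.
func terminationMessagePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:                     "termination-message-container",
				Image:                    "busybox:1.28",
				Command:                  []string{"sh", "-c", "echo -n DONE > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: v1.TerminationMessageReadFile,
			}},
		},
	}
}

func main() { fmt.Println(terminationMessagePod().Name) }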
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":8,"skipped":1061,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:47.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Nov 13 04:29:47.616: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:47.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-4130" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:46.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:50.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-787" for this suite. 
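In the runAsNonRoot spec above the pod never reaches a running container: a security context that asserts runAsNonRoot while explicitly requesting uid 0 is rejected by the kubelet's verification. A sketch of such a contradictory spec, with client-go types (names and image are illustrative):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// explicitRootUIDPod asks for RunAsNonRoot and uid 0 at the same time; the
// kubelet refuses to start the container, so it stays not-ready, which is
// the behavior the spec above waits for.
func explicitRootUIDPod() *v1.Pod {
	uid := int64(0)
	nonRoot := true
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-root-uid"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "explicit-root-uid",
				Image: "busybox:1.28",
				SecurityContext: &v1.SecurityContext{
					RunAsNonRoot: &nonRoot,
					RunAsUser:    &uid, // contradicts RunAsNonRoot
				},
			}},
		},
	}
}

func main() { fmt.Println(explicitRootUIDPod().Name) }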
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":6,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:47.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Nov 13 04:29:47.952: INFO: Waiting up to 5m0s for pod "security-context-1ee967ab-a28c-4a63-b7e3-660392322aba" in namespace "security-context-9357" to be "Succeeded or Failed" Nov 13 04:29:47.954: INFO: Pod "security-context-1ee967ab-a28c-4a63-b7e3-660392322aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123646ms Nov 13 04:29:49.959: INFO: Pod "security-context-1ee967ab-a28c-4a63-b7e3-660392322aba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006437761s Nov 13 04:29:51.964: INFO: Pod "security-context-1ee967ab-a28c-4a63-b7e3-660392322aba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011335361s STEP: Saw pod success Nov 13 04:29:51.964: INFO: Pod "security-context-1ee967ab-a28c-4a63-b7e3-660392322aba" satisfied condition "Succeeded or Failed" Nov 13 04:29:51.966: INFO: Trying to get logs from node node2 pod security-context-1ee967ab-a28c-4a63-b7e3-660392322aba container test-container: STEP: delete the pod Nov 13 04:29:51.977: INFO: Waiting for pod security-context-1ee967ab-a28c-4a63-b7e3-660392322aba to disappear Nov 13 04:29:51.979: INFO: Pod security-context-1ee967ab-a28c-4a63-b7e3-660392322aba no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:51.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9357" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":9,"skipped":1378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:52.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Nov 13 04:29:52.292: INFO: Waiting up to 5m0s for pod "security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c" in namespace "security-context-1534" to be "Succeeded or Failed" Nov 13 04:29:52.296: INFO: Pod "security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018626ms Nov 13 04:29:54.300: INFO: Pod "security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007446514s Nov 13 04:29:56.304: INFO: Pod "security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011587154s STEP: Saw pod success Nov 13 04:29:56.304: INFO: Pod "security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c" satisfied condition "Succeeded or Failed" Nov 13 04:29:56.307: INFO: Trying to get logs from node node1 pod security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c container test-container: STEP: delete the pod Nov 13 04:29:56.321: INFO: Waiting for pod security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c to disappear Nov 13 04:29:56.323: INFO: Pod security-context-5b5e0f51-6220-4d4c-9e02-ecbfc453025c no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:56.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1534" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":10,"skipped":1532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:56.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:29:56.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-849" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":11,"skipped":1572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:05.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-109a9a53-f559-47c5-a5e5-e7724143132e in namespace container-probe-7903 Nov 13 04:29:09.959: INFO: Started pod busybox-109a9a53-f559-47c5-a5e5-e7724143132e in namespace container-probe-7903 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:29:09.961: INFO: Initial restart count of pod busybox-109a9a53-f559-47c5-a5e5-e7724143132e is 0 Nov 13 04:30:00.056: INFO: Restart count of pod container-probe-7903/busybox-109a9a53-f559-47c5-a5e5-e7724143132e is now 1 (50.094990491s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:30:00.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7903" for this suite. 
• [SLOW TEST:54.153 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":3,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:51.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-967a8e04-6e72-4892-a850-a2879db7f853 bar STEP: verifying the node has the label fizz-c940d2f5-2b66-4ef1-9233-9ed99f3b8920 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-c940d2f5-2b66-4ef1-9233-9ed99f3b8920 off the node node1 STEP: verifying the node doesn't have the label fizz-c940d2f5-2b66-4ef1-9233-9ed99f3b8920 STEP: removing the label foo-967a8e04-6e72-4892-a850-a2879db7f853 off the node node1 STEP: verifying the node doesn't have the label foo-967a8e04-6e72-4892-a850-a2879db7f853 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:30:01.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-6865" for this suite. 
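The RuntimeClass spec above labels a node, creates a RuntimeClass whose scheduling selector matches those labels, and verifies that a pod referencing the class is placed accordingly. A sketch of the two objects involved; the class name, handler, and label keys are illustrative (the test's real keys are the random foo-.../fizz-... pairs visible in the log):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// RuntimeClass whose scheduling constraints mirror the labels the test
	// put on node1; the scheduler merges this selector into any pod that
	// references the class.
	rc := nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "test-runtimeclass"}, // illustrative
		Handler:    "runc",                                       // illustrative handler
		Scheduling: &nodev1.Scheduling{
			NodeSelector: map[string]string{"foo": "bar", "fizz": "buzz"},
		},
	}
	rcName := rc.Name
	// Pod opting into the class via runtimeClassName.
	pod := corev1.PodSpec{
		RuntimeClassName: &rcName,
		Containers:       []corev1.Container{{Name: "test", Image: "busybox"}},
	}
	for _, obj := range []interface{}{rc, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
```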
• [SLOW TEST:10.126 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":7,"skipped":806,"failed":0} SSSSSSSSSSS ------------------------------ Nov 13 04:30:01.354: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:29:56.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Nov 13 04:29:56.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3306 create -f -' Nov 13 04:29:56.982: INFO: stderr: "" Nov 13 04:29:56.982: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Nov 13 04:30:02.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3306 logs dapi-test-pod test-container' Nov 13 04:30:03.168: INFO: stderr: "" Nov 13 04:30:03.168: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3306\nMY_POD_IP=10.244.4.74\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Nov 13 04:30:03.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3306 logs dapi-test-pod test-container' Nov 13 04:30:03.329: INFO: stderr: "" Nov 13 04:30:03.329: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-3306\nMY_POD_IP=10.244.4.74\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:30:03.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3306" for this suite. 
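The dapi-test-pod output above shows the Downward API at work: MY_POD_NAME, MY_POD_NAMESPACE, MY_POD_IP, and MY_HOST_IP are injected as environment variables from pod fields. A sketch of the container spec that produces exactly those variables; the image and command are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Helper building a Downward-API env var from a pod field path.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	// Variable names match the ones printed by dapi-test-pod in the log.
	c := corev1.Container{
		Name:    "test-container",
		Image:   "busybox", // illustrative; prints its environment and exits
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{
			fieldEnv("MY_POD_NAME", "metadata.name"),
			fieldEnv("MY_POD_NAMESPACE", "metadata.namespace"),
			fieldEnv("MY_POD_IP", "status.podIP"),
			fieldEnv("MY_HOST_IP", "status.hostIP"),
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```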
• [SLOW TEST:6.807 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":12,"skipped":1615,"failed":0} Nov 13 04:30:03.339: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:59.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-ab4afc2d-6d0d-4c94-ae84-c08f58fb6816 in namespace container-probe-9344 Nov 13 04:29:05.669: INFO: Started pod startup-ab4afc2d-6d0d-4c94-ae84-c08f58fb6816 in namespace container-probe-9344 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:29:05.672: INFO: Initial restart count of pod startup-ab4afc2d-6d0d-4c94-ae84-c08f58fb6816 is 0 Nov 13 04:30:03.793: INFO: Restart count of pod container-probe-9344/startup-ab4afc2d-6d0d-4c94-ae84-c08f58fb6816 is now 1 (58.120647224s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:30:03.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9344" for this suite. 
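The probe spec above depends on the interaction of startup and liveness probes: while the startup probe is failing, liveness is suppressed; once startup succeeds, a failing liveness probe takes over and restarts the container. The ~58s to the first restart is roughly the startup window plus the liveness failureThreshold × periodSeconds. A sketch with illustrative probe commands and timings:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "startup-then-liveness", // illustrative
		Image: "busybox",               // illustrative
		// Startup probe that eventually succeeds (e.g. once the container
		// creates the file); until then the liveness probe below never runs.
		StartupProbe: &corev1.Probe{
			// ProbeHandler is the field name in current k8s.io/api releases;
			// the v1.21 sources this suite ran against call it Handler.
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}},
			},
			PeriodSeconds:    10,
			FailureThreshold: 3,
		},
		// Always-failing liveness probe: it only matters after startup
		// succeeds, at which point it triggers the restart seen in the log.
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			PeriodSeconds:    10,
			FailureThreshold: 1,
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```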
• [SLOW TEST:64.188 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":7,"skipped":518,"failed":0} Nov 13 04:30:03.812: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:30:00.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-e3c07a48-4c00-488c-9cec-5b7fd3837cc5 in namespace container-probe-1499 Nov 13 04:30:06.183: INFO: Started pod busybox-e3c07a48-4c00-488c-9cec-5b7fd3837cc5 in namespace container-probe-1499 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:30:06.185: INFO: Initial restart count of pod busybox-e3c07a48-4c00-488c-9cec-5b7fd3837cc5 is 0 Nov 13 04:30:54.290: INFO: Restart count of pod container-probe-1499/busybox-e3c07a48-4c00-488c-9cec-5b7fd3837cc5 is now 1 (48.104513131s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:30:54.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1499" for this suite. 
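The [MinimumKubeletVersion:1.20] tag on the spec above marks the behaviour it depends on: kubelets from 1.20 onward enforce timeoutSeconds on exec probes (earlier kubelets silently ignored it), so a probe command that outruns its timeout counts as a failure and the container is restarted, matching the restartCount bump in the log. A sketch of such a probe, with illustrative timings:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An exec probe whose command can never finish within timeoutSeconds.
	liveness := &corev1.Probe{
		// ProbeHandler is the field name in current k8s.io/api; v1.21 calls it Handler.
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 600"}},
		},
		InitialDelaySeconds: 5,
		TimeoutSeconds:      1, // enforced by kubelets >= 1.20
		PeriodSeconds:       10,
		FailureThreshold:    1,
	}
	b, _ := json.MarshalIndent(liveness, "", "  ")
	fmt.Println(string(b))
}
```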
• [SLOW TEST:54.163 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:19.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Nov 13 04:28:24.888: INFO: watch delete seen for pod-submit-status-1-0 Nov 13 04:28:24.888: INFO: Pod pod-submit-status-1-0 on node node1 timings total=5.435877651s t=748ms run=0s execute=0s Nov 13 04:28:25.273: INFO: watch delete seen for pod-submit-status-2-0 Nov 13 04:28:25.273: INFO: Pod pod-submit-status-2-0 on node node1 timings total=5.820307576s t=1.51s run=0s execute=0s Nov 13 04:28:26.716: INFO: watch delete seen for pod-submit-status-0-0 Nov 13 04:28:26.716: INFO: Pod pod-submit-status-0-0 on node node2 timings total=7.263522573s t=1.498s run=0s execute=0s Nov 13 04:28:27.553: INFO: watch delete seen for pod-submit-status-0-1 Nov 13 04:28:27.553: INFO: Pod pod-submit-status-0-1 on node node1 timings total=836.781469ms t=726ms run=0s execute=0s Nov 13 04:28:28.421: INFO: watch delete seen for pod-submit-status-1-1 Nov 13 04:28:28.421: INFO: Pod pod-submit-status-1-1 on node node1 timings total=3.532568392s t=172ms run=0s execute=0s Nov 13 04:28:31.961: INFO: watch delete seen for pod-submit-status-0-2 Nov 13 04:28:31.961: INFO: Pod pod-submit-status-0-2 on node node2 timings total=4.408099741s t=1.723s run=0s execute=0s Nov 13 04:28:41.355: INFO: watch delete seen for pod-submit-status-2-1 Nov 13 04:28:41.355: INFO: Pod pod-submit-status-2-1 on node node1 timings total=16.082057736s t=1.634s run=0s execute=0s Nov 13 04:28:41.364: INFO: watch delete seen for pod-submit-status-0-3 Nov 13 04:28:41.364: INFO: Pod pod-submit-status-0-3 on node node1 timings total=9.4032957s t=1.602s run=0s execute=0s Nov 13 04:28:42.963: INFO: watch delete seen for pod-submit-status-1-2 Nov 13 04:28:42.963: INFO: Pod pod-submit-status-1-2 on node node2 timings total=14.541568884s t=1.054s run=0s execute=0s Nov 13 04:28:44.323: INFO: watch delete seen for pod-submit-status-0-4 Nov 13 04:28:44.323: INFO: Pod pod-submit-status-0-4 on node node1 timings total=2.958923415s t=329ms run=0s execute=0s Nov 13 04:28:45.751: INFO: watch delete seen for pod-submit-status-1-3 Nov 13 04:28:45.751: INFO: Pod pod-submit-status-1-3 on node node1 timings total=2.788525656s t=500ms run=0s execute=0s Nov 13 04:28:51.356: INFO: watch delete seen for pod-submit-status-2-2 Nov 13 04:28:51.357: INFO: Pod pod-submit-status-2-2 on node node1 timings total=10.001459155s t=249ms run=0s execute=0s Nov 13 04:28:56.962: INFO: watch delete seen for 
pod-submit-status-0-5 Nov 13 04:28:56.962: INFO: Pod pod-submit-status-0-5 on node node2 timings total=12.638799878s t=1.228s run=0s execute=0s Nov 13 04:29:01.565: INFO: watch delete seen for pod-submit-status-2-3 Nov 13 04:29:01.565: INFO: Pod pod-submit-status-2-3 on node node2 timings total=10.208515244s t=1.95s run=0s execute=0s Nov 13 04:29:05.561: INFO: watch delete seen for pod-submit-status-1-4 Nov 13 04:29:05.561: INFO: Pod pod-submit-status-1-4 on node node2 timings total=19.809619396s t=1.801s run=0s execute=0s Nov 13 04:29:11.373: INFO: watch delete seen for pod-submit-status-2-4 Nov 13 04:29:11.373: INFO: Pod pod-submit-status-2-4 on node node1 timings total=9.808116522s t=1.564s run=0s execute=0s Nov 13 04:29:11.392: INFO: watch delete seen for pod-submit-status-0-6 Nov 13 04:29:11.392: INFO: Pod pod-submit-status-0-6 on node node1 timings total=14.430012625s t=396ms run=0s execute=0s Nov 13 04:29:14.363: INFO: watch delete seen for pod-submit-status-0-7 Nov 13 04:29:14.363: INFO: Pod pod-submit-status-0-7 on node node2 timings total=2.970746521s t=398ms run=0s execute=0s Nov 13 04:29:21.357: INFO: watch delete seen for pod-submit-status-1-5 Nov 13 04:29:21.357: INFO: Pod pod-submit-status-1-5 on node node1 timings total=15.796022762s t=1.302s run=0s execute=0s Nov 13 04:29:21.365: INFO: watch delete seen for pod-submit-status-0-8 Nov 13 04:29:21.365: INFO: Pod pod-submit-status-0-8 on node node1 timings total=7.002117287s t=607ms run=0s execute=0s Nov 13 04:29:21.444: INFO: watch delete seen for pod-submit-status-2-5 Nov 13 04:29:21.444: INFO: Pod pod-submit-status-2-5 on node node2 timings total=10.071113024s t=282ms run=0s execute=0s Nov 13 04:29:31.351: INFO: watch delete seen for pod-submit-status-2-6 Nov 13 04:29:31.351: INFO: Pod pod-submit-status-2-6 on node node1 timings total=9.906297511s t=1.903s run=3s execute=0s Nov 13 04:29:31.357: INFO: watch delete seen for pod-submit-status-1-6 Nov 13 04:29:31.358: INFO: Pod pod-submit-status-1-6 on node node1 timings total=10.000420756s t=1.048s run=0s execute=0s Nov 13 04:29:31.447: INFO: watch delete seen for pod-submit-status-0-9 Nov 13 04:29:31.447: INFO: Pod pod-submit-status-0-9 on node node2 timings total=10.081898891s t=764ms run=0s execute=0s Nov 13 04:29:36.158: INFO: watch delete seen for pod-submit-status-1-7 Nov 13 04:29:36.158: INFO: Pod pod-submit-status-1-7 on node node1 timings total=4.800036605s t=369ms run=0s execute=0s Nov 13 04:29:36.166: INFO: watch delete seen for pod-submit-status-0-10 Nov 13 04:29:36.166: INFO: Pod pod-submit-status-0-10 on node node1 timings total=4.718331408s t=1.633s run=0s execute=0s Nov 13 04:29:39.320: INFO: watch delete seen for pod-submit-status-2-7 Nov 13 04:29:39.320: INFO: Pod pod-submit-status-2-7 on node node1 timings total=7.969599715s t=1.649s run=0s execute=0s Nov 13 04:29:50.351: INFO: watch delete seen for pod-submit-status-0-11 Nov 13 04:29:50.351: INFO: Pod pod-submit-status-0-11 on node node1 timings total=14.185355279s t=143ms run=0s execute=0s Nov 13 04:29:51.443: INFO: watch delete seen for pod-submit-status-1-8 Nov 13 04:29:51.443: INFO: Pod pod-submit-status-1-8 on node node2 timings total=15.28532122s t=74ms run=0s execute=0s Nov 13 04:30:00.530: INFO: watch delete seen for pod-submit-status-2-8 Nov 13 04:30:00.530: INFO: Pod pod-submit-status-2-8 on node node1 timings total=21.209462834s t=1.035s run=0s execute=0s Nov 13 04:30:01.351: INFO: watch delete seen for pod-submit-status-0-12 Nov 13 04:30:01.351: INFO: Pod pod-submit-status-0-12 on node node1 
timings total=11.000278513s t=536ms run=0s execute=0s Nov 13 04:30:01.438: INFO: watch delete seen for pod-submit-status-1-9 Nov 13 04:30:01.438: INFO: Pod pod-submit-status-1-9 on node node2 timings total=9.995291873s t=1.672s run=0s execute=0s Nov 13 04:30:06.787: INFO: watch delete seen for pod-submit-status-2-9 Nov 13 04:30:06.787: INFO: Pod pod-submit-status-2-9 on node node1 timings total=6.257282977s t=364ms run=0s execute=0s Nov 13 04:30:11.353: INFO: watch delete seen for pod-submit-status-1-10 Nov 13 04:30:11.353: INFO: Pod pod-submit-status-1-10 on node node1 timings total=9.914242291s t=1.54s run=0s execute=0s Nov 13 04:30:11.361: INFO: watch delete seen for pod-submit-status-0-13 Nov 13 04:30:11.361: INFO: Pod pod-submit-status-0-13 on node node1 timings total=10.009619929s t=794ms run=0s execute=0s Nov 13 04:30:13.627: INFO: watch delete seen for pod-submit-status-0-14 Nov 13 04:30:13.627: INFO: Pod pod-submit-status-0-14 on node node1 timings total=2.266212062s t=391ms run=0s execute=0s Nov 13 04:30:21.354: INFO: watch delete seen for pod-submit-status-1-11 Nov 13 04:30:21.354: INFO: Pod pod-submit-status-1-11 on node node1 timings total=10.001441986s t=458ms run=0s execute=0s Nov 13 04:30:21.448: INFO: watch delete seen for pod-submit-status-2-10 Nov 13 04:30:21.448: INFO: Pod pod-submit-status-2-10 on node node2 timings total=14.660302549s t=769ms run=0s execute=0s Nov 13 04:30:24.402: INFO: watch delete seen for pod-submit-status-2-11 Nov 13 04:30:24.402: INFO: Pod pod-submit-status-2-11 on node node2 timings total=2.954358009s t=477ms run=0s execute=0s Nov 13 04:30:24.567: INFO: watch delete seen for pod-submit-status-1-12 Nov 13 04:30:24.567: INFO: Pod pod-submit-status-1-12 on node node1 timings total=3.212721168s t=611ms run=0s execute=0s Nov 13 04:30:31.357: INFO: watch delete seen for pod-submit-status-2-12 Nov 13 04:30:31.358: INFO: Pod pod-submit-status-2-12 on node node1 timings total=6.955294528s t=837ms run=0s execute=0s Nov 13 04:30:41.448: INFO: watch delete seen for pod-submit-status-2-13 Nov 13 04:30:41.448: INFO: Pod pod-submit-status-2-13 on node node2 timings total=10.090471446s t=1.65s run=2s execute=0s Nov 13 04:30:44.364: INFO: watch delete seen for pod-submit-status-2-14 Nov 13 04:30:44.364: INFO: Pod pod-submit-status-2-14 on node node2 timings total=2.915973465s t=709ms run=0s execute=0s Nov 13 04:31:01.009: INFO: watch delete seen for pod-submit-status-1-13 Nov 13 04:31:01.009: INFO: Pod pod-submit-status-1-13 on node node1 timings total=36.441662326s t=1.994s run=0s execute=0s Nov 13 04:31:11.451: INFO: watch delete seen for pod-submit-status-1-14 Nov 13 04:31:11.451: INFO: Pod pod-submit-status-1-14 on node node2 timings total=10.442523269s t=1.116s run=2s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:31:11.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2314" for this suite. 
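The spec above launches pods whose containers always exit 1, deletes them after random delays, and watches their status, asserting that no intermediate update ever claims a successful (exit 0) termination for a container that was still pending. A hedged client-go sketch of the watch-side check; the kubeconfig path and namespace are the ones in the log, and the loop is simplified relative to the real test:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Watch pod status updates in the test namespace and flag any container
	// status that reports a zero exit code -- the test's pods always exit 1,
	// so a "success" would mean the kubelet mis-reported a pending container.
	w, err := client.CoreV1().Pods("pods-2314").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, st := range pod.Status.ContainerStatuses {
			if t := st.State.Terminated; t != nil && t.ExitCode == 0 {
				fmt.Printf("unexpected success: pod=%s container=%s\n", pod.Name, st.Name)
			}
		}
	}
}
```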
• [SLOW TEST:172.036 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":6,"skipped":666,"failed":0} Nov 13 04:31:11.466: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:27:45.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-691b5db7-aaad-45a1-a595-89be6ddc80d9 in namespace container-probe-5823 Nov 13 04:27:49.815: INFO: Started pod liveness-691b5db7-aaad-45a1-a595-89be6ddc80d9 in namespace container-probe-5823 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:27:49.818: INFO: Initial restart count of pod liveness-691b5db7-aaad-45a1-a595-89be6ddc80d9 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:31:50.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5823" for this suite. 
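The non-local-redirect spec above relies on the kubelet's probe semantics: an httpGet liveness probe that receives a redirect to a different host does not follow it; the 3xx response is treated as a probe success (with a warning event), so the container is never restarted and restartCount stays 0 for the whole observation window. A sketch of such a probe; the path assumes the agnhost-style redirect helper the e2e images expose, and the port and timings are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// httpGet probe hitting an endpoint that answers with a redirect to a
	// different host; the kubelet counts the redirect itself as success.
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{ // Handler in the v1.21 API
			HTTPGet: &corev1.HTTPGetAction{
				// Assumed redirect-helper path, modeled on the agnhost test image.
				Path: "/redirect?loc=http://0.0.0.0/",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	}
	b, _ := json.MarshalIndent(liveness, "", "  ")
	fmt.Println(string(b))
}
```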
• [SLOW TEST:244.602 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":2,"skipped":32,"failed":0} Nov 13 04:31:50.382: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:27:54.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Nov 13 04:27:54.437: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:27:56.442: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:27:58.441: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:00.440: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:02.444: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Nov 13 04:28:04.440: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Nov 13 04:29:09.459: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-11-13 04:28:39 +0000 UTC restartedAt=2021-11-13 04:29:04 +0000 UTC (25s) STEP: getting restart delay-1 Nov 13 04:30:03.654: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-11-13 04:29:09 +0000 UTC restartedAt=2021-11-13 04:30:02 +0000 UTC (53s) STEP: getting restart delay-2 Nov 13 04:31:31.010: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-11-13 04:30:07 +0000 UTC restartedAt=2021-11-13 04:31:29 +0000 UTC (1m22s) STEP: updating the image Nov 13 04:31:31.523: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Nov 13 04:31:53.570: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-13 04:31:40 +0000 UTC restartedAt=2021-11-13 04:31:52 +0000 UTC (12s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:31:53.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3194" for this suite. 
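The back-off spec above turns on the kubelet's crash-loop behaviour: the restart delay starts at 10s and doubles per restart, capped at 5m, and updating the image resets the timer. The doubling pattern is visible in the log's 25s → 53s → 1m22s progression (the raw gaps also include image handling and sync-loop jitter, so they need not match the nominal series exactly), and the reset is why the post-update gap drops to 12s. A worked sketch of the nominal series:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Nominal kubelet crash-loop back-off: 10s base, doubling per restart,
	// capped at 5 minutes. An image update resets this timer, which is the
	// behaviour the spec above asserts.
	const maxBackoff = 5 * time.Minute
	backoff := 10 * time.Second
	for restart := 1; restart <= 6; restart++ {
		fmt.Printf("restart %d: nominal back-off ~%v\n", restart, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```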
• [SLOW TEST:239.177 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":2,"skipped":292,"failed":0} Nov 13 04:31:53.581: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:10.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-3a526bd7-82a1-4b14-aa49-8a32e9eb49f3 in namespace container-probe-1503 Nov 13 04:28:16.486: INFO: Started pod startup-3a526bd7-82a1-4b14-aa49-8a32e9eb49f3 in namespace container-probe-1503 STEP: checking the pod's current state and verifying that restartCount is present Nov 13 04:28:16.488: INFO: Initial restart count of pod startup-3a526bd7-82a1-4b14-aa49-8a32e9eb49f3 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Nov 13 04:32:17.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1503" for this suite. 
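The spec above is the counterpart of the earlier startup-probe case: while the startup probe is still within its failureThreshold × periodSeconds budget, the liveness probe is not run at all, so even an always-failing liveness command cannot restart the container, and restartCount stays 0 for the full watch. A sketch with illustrative numbers; a 60 × 10s startup budget comfortably covers the roughly four-minute observation in the log:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "startup-delays-liveness", // illustrative
		Image: "busybox",                 // illustrative
		// Startup probe with a long failure budget; until it succeeds or the
		// budget is exhausted, the liveness probe below is never executed.
		StartupProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{ // Handler in the v1.21 API
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/never-created"}},
			},
			PeriodSeconds:    10,
			FailureThreshold: 60,
		},
		// Always-failing liveness probe that cannot fire while startup is pending.
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"false"}},
			},
			PeriodSeconds: 10,
		},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}
```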
• [SLOW TEST:246.589 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":5,"skipped":668,"failed":0} Nov 13 04:32:17.033: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Nov 13 04:28:46.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Nov 13 04:28:47.011: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Nov 13 04:28:48.023: INFO: node status heartbeat is unchanged for 1.003513293s, waiting for 1m20s Nov 13 04:28:49.024: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:28:49.029: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:48 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:48 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:38 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:48 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    
Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Nov 13 04:28:50.023: INFO: node status heartbeat is unchanged for 999.979229ms, waiting for 1m20s Nov 13 04:28:51.023: INFO: node status heartbeat is unchanged for 1.999492819s, waiting for 1m20s Nov 13 04:28:52.023: INFO: node status heartbeat is unchanged for 2.9994836s, waiting for 1m20s Nov 13 04:28:53.023: INFO: node status heartbeat is unchanged for 3.999592032s, waiting for 1m20s Nov 13 04:28:54.023: INFO: node status heartbeat is unchanged for 4.999496454s, waiting for 1m20s Nov 13 04:28:55.025: INFO: node status heartbeat is unchanged for 6.001606059s, waiting for 1m20s Nov 13 04:28:56.024: INFO: node status heartbeat is unchanged for 7.000953918s, waiting for 1m20s Nov 13 04:28:57.024: INFO: node status heartbeat is unchanged for 8.000971286s, waiting for 1m20s Nov 13 04:28:58.024: INFO: node status heartbeat is unchanged for 9.00026679s, waiting for 1m20s Nov 13 04:28:59.023: INFO: node status heartbeat is unchanged for 9.99917991s, waiting for 1m20s Nov 13 04:29:00.023: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Nov 13 04:29:00.027: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:48 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:59 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: 
"10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Nov 13 04:29:01.024: INFO: node status heartbeat is unchanged for 1.001037811s, waiting for 1m20s Nov 13 04:29:02.023: INFO: node status heartbeat is unchanged for 2.000763495s, waiting for 1m20s Nov 13 04:29:03.024: INFO: node status heartbeat is unchanged for 3.001682402s, waiting for 1m20s Nov 13 04:29:04.022: INFO: node status heartbeat is unchanged for 3.999762219s, waiting for 1m20s Nov 13 04:29:05.025: INFO: node status heartbeat is unchanged for 5.002611075s, waiting for 1m20s Nov 13 04:29:06.023: INFO: node status heartbeat is unchanged for 5.999948419s, waiting for 1m20s Nov 13 04:29:07.024: INFO: node status heartbeat is unchanged for 7.000975615s, waiting for 1m20s Nov 13 04:29:08.024: INFO: node status heartbeat is unchanged for 8.001007496s, waiting for 1m20s Nov 13 04:29:09.025: INFO: node status heartbeat is unchanged for 9.002188856s, waiting for 1m20s Nov 13 04:29:10.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:29:10.030: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:09 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:09 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:28:59 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:09 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 13 04:29:11.023: INFO: node status heartbeat is unchanged for 998.316521ms, waiting for 1m20s Nov 13 04:29:12.025: INFO: node status heartbeat is unchanged for 2.000117919s, waiting for 1m20s Nov 13 04:29:13.024: INFO: node status heartbeat is unchanged for 2.999305484s, waiting for 1m20s Nov 13 04:29:14.022: INFO: node status heartbeat is unchanged for 3.997367951s, waiting for 1m20s Nov 13 04:29:15.027: INFO: node status heartbeat is unchanged for 5.002366339s, waiting for 1m20s Nov 13 04:29:16.024: INFO: node status heartbeat is unchanged for 5.998982462s, waiting for 1m20s Nov 13 04:29:17.025: INFO: node status heartbeat is unchanged for 7.000339473s, waiting for 1m20s Nov 13 04:29:18.024: INFO: node status heartbeat is unchanged for 7.999392975s, waiting for 1m20s Nov 13 04:29:19.023: INFO: node status heartbeat is unchanged for 8.998472807s, waiting for 1m20s Nov 13 04:29:20.024: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:29:20.029: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:19 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:19 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:09 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:19 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 13 04:29:21.025: INFO: node status heartbeat is unchanged for 1.00076665s, waiting for 1m20s Nov 13 04:29:22.024: INFO: node status heartbeat is unchanged for 1.999658029s, waiting for 1m20s Nov 13 04:29:23.025: INFO: node status heartbeat is unchanged for 3.001281197s, waiting for 1m20s Nov 13 04:29:24.022: INFO: node status heartbeat is unchanged for 3.998495254s, waiting for 1m20s Nov 13 04:29:25.027: INFO: node status heartbeat is unchanged for 5.003292053s, waiting for 1m20s Nov 13 04:29:26.025: INFO: node status heartbeat is unchanged for 6.000718086s, waiting for 1m20s Nov 13 04:29:27.025: INFO: node status heartbeat is unchanged for 7.000919557s, waiting for 1m20s Nov 13 04:29:28.023: INFO: node status heartbeat is unchanged for 7.998856229s, waiting for 1m20s Nov 13 04:29:29.024: INFO: node status heartbeat is unchanged for 9.000524024s, waiting for 1m20s Nov 13 04:29:30.027: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:29:30.031: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:19 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:29 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:19 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:29 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:19 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:29 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 13 04:29:31.024: INFO: node status heartbeat is unchanged for 997.360366ms, waiting for 1m20s Nov 13 04:29:32.023: INFO: node status heartbeat is unchanged for 1.996202088s, waiting for 1m20s Nov 13 04:29:33.022: INFO: node status heartbeat is unchanged for 2.995901194s, waiting for 1m20s Nov 13 04:29:34.024: INFO: node status heartbeat is unchanged for 3.997387925s, waiting for 1m20s Nov 13 04:29:35.025: INFO: node status heartbeat is unchanged for 4.998644193s, waiting for 1m20s Nov 13 04:29:36.023: INFO: node status heartbeat is unchanged for 5.996252645s, waiting for 1m20s Nov 13 04:29:37.023: INFO: node status heartbeat is unchanged for 6.996432342s, waiting for 1m20s Nov 13 04:29:38.024: INFO: node status heartbeat is unchanged for 7.997781968s, waiting for 1m20s Nov 13 04:29:39.023: INFO: node status heartbeat is unchanged for 8.996443938s, waiting for 1m20s Nov 13 04:29:40.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:29:40.029: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:29 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:39 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:29 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:39 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:29 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:39 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Nov 13 04:29:41.024: INFO: node status heartbeat is unchanged for 998.879211ms, waiting for 1m20s Nov 13 04:29:42.023: INFO: node status heartbeat is unchanged for 1.998689368s, waiting for 1m20s Nov 13 04:29:43.026: INFO: node status heartbeat is unchanged for 3.001268538s, waiting for 1m20s Nov 13 04:29:44.024: INFO: node status heartbeat is unchanged for 3.998907346s, waiting for 1m20s Nov 13 04:29:45.024: INFO: node status heartbeat is unchanged for 4.999605003s, waiting for 1m20s Nov 13 04:29:46.028: INFO: node status heartbeat is unchanged for 6.003354492s, waiting for 1m20s Nov 13 04:29:47.024: INFO: node status heartbeat is unchanged for 6.999512267s, waiting for 1m20s Nov 13 04:29:48.024: INFO: node status heartbeat is unchanged for 7.998882843s, waiting for 1m20s Nov 13 04:29:49.024: INFO: node status heartbeat is unchanged for 8.999587455s, waiting for 1m20s Nov 13 04:29:50.024: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Nov 13 04:29:50.029: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:39 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:49 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:39 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:49 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:39 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:49 +0000 UTC"},    LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
Nov 13 04:29:51.023: INFO: node status heartbeat is unchanged for 998.928885ms, waiting for 1m20s
Nov 13 04:29:52.022: INFO: node status heartbeat is unchanged for 1.998328642s, waiting for 1m20s
Nov 13 04:29:53.023: INFO: node status heartbeat is unchanged for 2.999113887s, waiting for 1m20s
Nov 13 04:29:54.022: INFO: node status heartbeat is unchanged for 3.998542622s, waiting for 1m20s
Nov 13 04:29:55.025: INFO: node status heartbeat is unchanged for 5.001687832s, waiting for 1m20s
Nov 13 04:29:56.023: INFO: node status heartbeat is unchanged for 5.998829944s, waiting for 1m20s
Nov 13 04:29:57.023: INFO: node status heartbeat is unchanged for 6.998926901s, waiting for 1m20s
Nov 13 04:29:58.024: INFO: node status heartbeat is unchanged for 8.000705561s, waiting for 1m20s
Nov 13 04:29:59.023: INFO: node status heartbeat is unchanged for 8.998954484s, waiting for 1m20s
Nov 13 04:30:00.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:30:00.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:49 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:59 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:49 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:59 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:49 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:59 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:30:01.023: INFO: node status heartbeat is unchanged for 999.365658ms, waiting for 1m20s
Nov 13 04:30:02.023: INFO: node status heartbeat is unchanged for 1.999580975s, waiting for 1m20s
Nov 13 04:30:03.022: INFO: node status heartbeat is unchanged for 2.99928327s, waiting for 1m20s
Nov 13 04:30:04.023: INFO: node status heartbeat is unchanged for 3.999775635s, waiting for 1m20s
Nov 13 04:30:05.023: INFO: node status heartbeat is unchanged for 4.999843232s, waiting for 1m20s
Nov 13 04:30:06.022: INFO: node status heartbeat is unchanged for 5.998960603s, waiting for 1m20s
Nov 13 04:30:07.023: INFO: node status heartbeat is unchanged for 7.00024929s, waiting for 1m20s
Nov 13 04:30:08.026: INFO: node status heartbeat is unchanged for 8.00320942s, waiting for 1m20s
Nov 13 04:30:09.022: INFO: node status heartbeat is unchanged for 8.999228107s, waiting for 1m20s
Nov 13 04:30:10.026: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:30:10.030: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:59 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:09 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:59 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:09 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:29:59 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:09 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:30:11.025: INFO: node status heartbeat is unchanged for 999.147086ms, waiting for 1m20s
Nov 13 04:30:12.024: INFO: node status heartbeat is unchanged for 1.998728993s, waiting for 1m20s
Nov 13 04:30:13.039: INFO: node status heartbeat is unchanged for 3.013983387s, waiting for 1m20s
Nov 13 04:30:14.024: INFO: node status heartbeat is unchanged for 3.998230531s, waiting for 1m20s
Nov 13 04:30:15.023: INFO: node status heartbeat is unchanged for 4.997592386s, waiting for 1m20s
Nov 13 04:30:16.023: INFO: node status heartbeat is unchanged for 5.997510365s, waiting for 1m20s
Nov 13 04:30:17.023: INFO: node status heartbeat is unchanged for 6.997463451s, waiting for 1m20s
Nov 13 04:30:18.024: INFO: node status heartbeat is unchanged for 7.998219106s, waiting for 1m20s
Nov 13 04:30:19.023: INFO: node status heartbeat is unchanged for 8.99785598s, waiting for 1m20s
Nov 13 04:30:20.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:30:20.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:09 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:19 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:09 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:19 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:09 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:19 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
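[Editor's note] The Conditions slice in these dumps is the kubelet's standard set: NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, and Ready. For the four pressure/availability conditions, Status: "False" is the healthy value; for Ready, "True" is. A tiny illustrative helper (not from the suite) for reading readiness out of such a slice:

package nodeutil

import v1 "k8s.io/api/core/v1"

// isNodeReady reports whether the Ready condition in a NodeStatus is True.
// Illustrative only; the e2e framework has its own equivalents.
func isNodeReady(status v1.NodeStatus) bool {
	for _, c := range status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	// No Ready condition reported at all counts as not ready.
	return false
}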
Nov 13 04:30:21.023: INFO: node status heartbeat is unchanged for 1.000392313s, waiting for 1m20s
Nov 13 04:30:22.024: INFO: node status heartbeat is unchanged for 2.000809973s, waiting for 1m20s
Nov 13 04:30:23.023: INFO: node status heartbeat is unchanged for 2.999958965s, waiting for 1m20s
Nov 13 04:30:24.024: INFO: node status heartbeat is unchanged for 4.001335724s, waiting for 1m20s
Nov 13 04:30:25.024: INFO: node status heartbeat is unchanged for 5.000797565s, waiting for 1m20s
Nov 13 04:30:26.023: INFO: node status heartbeat is unchanged for 6.000680545s, waiting for 1m20s
Nov 13 04:30:27.025: INFO: node status heartbeat is unchanged for 7.001898855s, waiting for 1m20s
Nov 13 04:30:28.023: INFO: node status heartbeat is unchanged for 8.000370194s, waiting for 1m20s
Nov 13 04:30:29.023: INFO: node status heartbeat is unchanged for 9.00022666s, waiting for 1m20s
Nov 13 04:30:30.024: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:30:30.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:19 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:29 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:19 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:29 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:19 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:29 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:30:31.023: INFO: node status heartbeat is unchanged for 999.466267ms, waiting for 1m20s
Nov 13 04:30:32.023: INFO: node status heartbeat is unchanged for 1.999655771s, waiting for 1m20s
Nov 13 04:30:33.024: INFO: node status heartbeat is unchanged for 3.000323646s, waiting for 1m20s
Nov 13 04:30:34.023: INFO: node status heartbeat is unchanged for 3.999231553s, waiting for 1m20s
Nov 13 04:30:35.024: INFO: node status heartbeat is unchanged for 4.999703979s, waiting for 1m20s
Nov 13 04:30:36.023: INFO: node status heartbeat is unchanged for 5.999172859s, waiting for 1m20s
Nov 13 04:30:37.024: INFO: node status heartbeat is unchanged for 7.000094821s, waiting for 1m20s
Nov 13 04:30:38.023: INFO: node status heartbeat is unchanged for 7.999536795s, waiting for 1m20s
Nov 13 04:30:39.023: INFO: node status heartbeat is unchanged for 8.998914584s, waiting for 1m20s
Nov 13 04:30:40.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:30:40.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:29 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:39 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:29 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:39 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:29 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:39 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:30:41.023: INFO: node status heartbeat is unchanged for 999.824481ms, waiting for 1m20s
Nov 13 04:30:42.025: INFO: node status heartbeat is unchanged for 2.001585286s, waiting for 1m20s
Nov 13 04:30:43.026: INFO: node status heartbeat is unchanged for 3.002478789s, waiting for 1m20s
Nov 13 04:30:44.022: INFO: node status heartbeat is unchanged for 3.999022012s, waiting for 1m20s
Nov 13 04:30:45.026: INFO: node status heartbeat is unchanged for 5.00294277s, waiting for 1m20s
Nov 13 04:30:46.024: INFO: node status heartbeat is unchanged for 6.000378351s, waiting for 1m20s
Nov 13 04:30:47.025: INFO: node status heartbeat is unchanged for 7.00189325s, waiting for 1m20s
Nov 13 04:30:48.026: INFO: node status heartbeat is unchanged for 8.002663813s, waiting for 1m20s
Nov 13 04:30:49.023: INFO: node status heartbeat is unchanged for 8.99959785s, waiting for 1m20s
Nov 13 04:30:50.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:30:50.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:39 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:49 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:39 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:49 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:39 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:49 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:30:51.025: INFO: node status heartbeat is unchanged for 1.000732386s, waiting for 1m20s
Nov 13 04:30:52.026: INFO: node status heartbeat is unchanged for 2.00128443s, waiting for 1m20s
Nov 13 04:30:53.025: INFO: node status heartbeat is unchanged for 3.000262894s, waiting for 1m20s
Nov 13 04:30:54.023: INFO: node status heartbeat is unchanged for 3.998610561s, waiting for 1m20s
Nov 13 04:30:55.027: INFO: node status heartbeat is unchanged for 5.001881246s, waiting for 1m20s
Nov 13 04:30:56.025: INFO: node status heartbeat is unchanged for 6.000315588s, waiting for 1m20s
Nov 13 04:30:57.026: INFO: node status heartbeat is unchanged for 7.000991782s, waiting for 1m20s
Nov 13 04:30:58.025: INFO: node status heartbeat is unchanged for 7.999856464s, waiting for 1m20s
Nov 13 04:30:59.023: INFO: node status heartbeat is unchanged for 8.998817903s, waiting for 1m20s
Nov 13 04:31:00.027: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:31:00.032: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:49 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:59 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:49 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:59 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:49 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:59 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:31:01.024: INFO: node status heartbeat is unchanged for 997.101932ms, waiting for 1m20s
Nov 13 04:31:02.025: INFO: node status heartbeat is unchanged for 1.997866476s, waiting for 1m20s
Nov 13 04:31:03.025: INFO: node status heartbeat is unchanged for 2.998324533s, waiting for 1m20s
Nov 13 04:31:04.023: INFO: node status heartbeat is unchanged for 3.995820209s, waiting for 1m20s
Nov 13 04:31:05.025: INFO: node status heartbeat is unchanged for 4.997778994s, waiting for 1m20s
Nov 13 04:31:06.024: INFO: node status heartbeat is unchanged for 5.996692523s, waiting for 1m20s
Nov 13 04:31:07.026: INFO: node status heartbeat is unchanged for 6.99864781s, waiting for 1m20s
Nov 13 04:31:08.026: INFO: node status heartbeat is unchanged for 7.998841739s, waiting for 1m20s
Nov 13 04:31:09.022: INFO: node status heartbeat is unchanged for 8.994963353s, waiting for 1m20s
Nov 13 04:31:10.026: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:31:10.030: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:59 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:09 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:59 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:09 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:30:59 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:09 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:31:11.024: INFO: node status heartbeat is unchanged for 997.992893ms, waiting for 1m20s
Nov 13 04:31:12.025: INFO: node status heartbeat is unchanged for 1.999132715s, waiting for 1m20s
Nov 13 04:31:13.023: INFO: node status heartbeat is unchanged for 2.997486993s, waiting for 1m20s
Nov 13 04:31:14.022: INFO: node status heartbeat is unchanged for 3.996612605s, waiting for 1m20s
Nov 13 04:31:15.026: INFO: node status heartbeat is unchanged for 5.000436993s, waiting for 1m20s
Nov 13 04:31:16.024: INFO: node status heartbeat is unchanged for 5.998410997s, waiting for 1m20s
Nov 13 04:31:17.025: INFO: node status heartbeat is unchanged for 6.999287659s, waiting for 1m20s
Nov 13 04:31:18.025: INFO: node status heartbeat is unchanged for 7.999059555s, waiting for 1m20s
Nov 13 04:31:19.022: INFO: node status heartbeat is unchanged for 8.996305323s, waiting for 1m20s
Nov 13 04:31:20.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:31:20.030: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:09 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:19 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:09 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:19 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:09 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:19 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:31:21.023: INFO: node status heartbeat is unchanged for 997.880144ms, waiting for 1m20s
Nov 13 04:31:22.024: INFO: node status heartbeat is unchanged for 1.999185546s, waiting for 1m20s
Nov 13 04:31:23.023: INFO: node status heartbeat is unchanged for 2.9983997s, waiting for 1m20s
Nov 13 04:31:24.024: INFO: node status heartbeat is unchanged for 3.999178683s, waiting for 1m20s
Nov 13 04:31:25.023: INFO: node status heartbeat is unchanged for 4.998392687s, waiting for 1m20s
Nov 13 04:31:26.022: INFO: node status heartbeat is unchanged for 5.997246769s, waiting for 1m20s
Nov 13 04:31:27.024: INFO: node status heartbeat is unchanged for 6.998597218s, waiting for 1m20s
Nov 13 04:31:28.022: INFO: node status heartbeat is unchanged for 7.997448321s, waiting for 1m20s
Nov 13 04:31:29.023: INFO: node status heartbeat is unchanged for 8.99771329s, waiting for 1m20s
Nov 13 04:31:30.022: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:31:30.027: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:19 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:29 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:19 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:29 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:19 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:29 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:31:31.023: INFO: node status heartbeat is unchanged for 1.000931646s, waiting for 1m20s
Nov 13 04:31:32.024: INFO: node status heartbeat is unchanged for 2.001925656s, waiting for 1m20s
Nov 13 04:31:33.023: INFO: node status heartbeat is unchanged for 3.000790474s, waiting for 1m20s
Nov 13 04:31:34.024: INFO: node status heartbeat is unchanged for 4.002222586s, waiting for 1m20s
Nov 13 04:31:35.023: INFO: node status heartbeat is unchanged for 5.001151114s, waiting for 1m20s
Nov 13 04:31:36.023: INFO: node status heartbeat is unchanged for 6.000917581s, waiting for 1m20s
Nov 13 04:31:37.023: INFO: node status heartbeat is unchanged for 7.000358133s, waiting for 1m20s
Nov 13 04:31:38.023: INFO: node status heartbeat is unchanged for 8.000794542s, waiting for 1m20s
Nov 13 04:31:39.025: INFO: node status heartbeat is unchanged for 9.002477722s, waiting for 1m20s
Nov 13 04:31:40.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:31:40.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:29 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:39 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:29 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:39 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:29 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:39 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:31:41.024: INFO: node status heartbeat is unchanged for 1.000582955s, waiting for 1m20s
Nov 13 04:31:42.026: INFO: node status heartbeat is unchanged for 2.002838142s, waiting for 1m20s
Nov 13 04:31:43.024: INFO: node status heartbeat is unchanged for 3.00148936s, waiting for 1m20s
Nov 13 04:31:44.024: INFO: node status heartbeat is unchanged for 4.000730372s, waiting for 1m20s
Nov 13 04:31:45.023: INFO: node status heartbeat is unchanged for 5.000440626s, waiting for 1m20s
Nov 13 04:31:46.025: INFO: node status heartbeat is unchanged for 6.00165145s, waiting for 1m20s
Nov 13 04:31:47.025: INFO: node status heartbeat is unchanged for 7.001822583s, waiting for 1m20s
Nov 13 04:31:48.026: INFO: node status heartbeat is unchanged for 8.003319383s, waiting for 1m20s
Nov 13 04:31:49.024: INFO: node status heartbeat is unchanged for 9.001013612s, waiting for 1m20s
Nov 13 04:31:50.024: INFO: node status heartbeat is unchanged for 10.001488438s, waiting for 1m20s
Nov 13 04:31:51.025: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Nov 13 04:31:51.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:39 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:50 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:39 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:50 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:39 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:50 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
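[Editor's note] The cycle just above is the one irregular beat in this stretch: ten "unchanged" polls instead of nine, and a heartbeat landing at 04:31:50 rather than 04:31:49, i.e. one update arrived about a second late. Per the "waiting for 1m20s" messages, the check only trips if the heartbeat stays frozen for the full 1m20s, so second-level jitter like this is harmless. The -/+ layout itself is go-cmp's diff format; below is a self-contained sketch of how such output is produced, with the condition type cut down to the one field that changes (the type and values here are illustrative stand-ins, not the real v1.NodeCondition):

package main

import (
	"fmt"
	"time"

	"github.com/google/go-cmp/cmp"
)

// condition is a cut-down stand-in for v1.NodeCondition.
type condition struct {
	Type              string
	Status            string
	LastHeartbeatTime time.Time
}

func main() {
	before := condition{"MemoryPressure", "False",
		time.Date(2021, 11, 13, 4, 31, 39, 0, time.UTC)}
	after := condition{"MemoryPressure", "False",
		time.Date(2021, 11, 13, 4, 31, 50, 0, time.UTC)}

	// Unchanged fields print once; the changed field appears as a
	// -/+ pair, matching the layout in the log above.
	fmt.Print(cmp.Diff(before, after))
}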
Nov 13 04:31:52.024: INFO: node status heartbeat is unchanged for 999.198836ms, waiting for 1m20s
Nov 13 04:31:53.025: INFO: node status heartbeat is unchanged for 2.0000823s, waiting for 1m20s
Nov 13 04:31:54.024: INFO: node status heartbeat is unchanged for 2.998942596s, waiting for 1m20s
Nov 13 04:31:55.024: INFO: node status heartbeat is unchanged for 3.998883266s, waiting for 1m20s
Nov 13 04:31:56.025: INFO: node status heartbeat is unchanged for 5.000624409s, waiting for 1m20s
Nov 13 04:31:57.026: INFO: node status heartbeat is unchanged for 6.000987434s, waiting for 1m20s
Nov 13 04:31:58.025: INFO: node status heartbeat is unchanged for 7.000702236s, waiting for 1m20s
Nov 13 04:31:59.023: INFO: node status heartbeat is unchanged for 7.998250212s, waiting for 1m20s
Nov 13 04:32:00.025: INFO: node status heartbeat is unchanged for 9.000797116s, waiting for 1m20s
Nov 13 04:32:01.024: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:32:01.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:50 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:00 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:50 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:00 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:31:50 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:00 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:32:02.024: INFO: node status heartbeat is unchanged for 1.000800411s, waiting for 1m20s
Nov 13 04:32:03.025: INFO: node status heartbeat is unchanged for 2.001479171s, waiting for 1m20s
Nov 13 04:32:04.025: INFO: node status heartbeat is unchanged for 3.00108249s, waiting for 1m20s
Nov 13 04:32:05.023: INFO: node status heartbeat is unchanged for 3.998838342s, waiting for 1m20s
Nov 13 04:32:06.025: INFO: node status heartbeat is unchanged for 5.00182124s, waiting for 1m20s
Nov 13 04:32:07.025: INFO: node status heartbeat is unchanged for 6.001474647s, waiting for 1m20s
Nov 13 04:32:08.025: INFO: node status heartbeat is unchanged for 7.001591575s, waiting for 1m20s
Nov 13 04:32:09.025: INFO: node status heartbeat is unchanged for 8.000913354s, waiting for 1m20s
Nov 13 04:32:10.025: INFO: node status heartbeat is unchanged for 9.001088435s, waiting for 1m20s
Nov 13 04:32:11.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:32:11.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:00 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:10 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:00 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:10 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:00 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:10 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:32:12.027: INFO: node status heartbeat is unchanged for 1.0034842s, waiting for 1m20s
Nov 13 04:32:13.025: INFO: node status heartbeat is unchanged for 2.001392782s, waiting for 1m20s
Nov 13 04:32:14.022: INFO: node status heartbeat is unchanged for 2.998941024s, waiting for 1m20s
Nov 13 04:32:15.026: INFO: node status heartbeat is unchanged for 4.002434791s, waiting for 1m20s
Nov 13 04:32:16.024: INFO: node status heartbeat is unchanged for 5.000996003s, waiting for 1m20s
Nov 13 04:32:17.024: INFO: node status heartbeat is unchanged for 6.000544855s, waiting for 1m20s
Nov 13 04:32:18.023: INFO: node status heartbeat is unchanged for 6.99951367s, waiting for 1m20s
Nov 13 04:32:19.023: INFO: node status heartbeat is unchanged for 7.999877852s, waiting for 1m20s
Nov 13 04:32:20.023: INFO: node status heartbeat is unchanged for 8.999417014s, waiting for 1m20s
Nov 13 04:32:21.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:32:21.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:10 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:20 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:10 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:20 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:10 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:20 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:32:22.024: INFO: node status heartbeat is unchanged for 999.05731ms, waiting for 1m20s
Nov 13 04:32:23.023: INFO: node status heartbeat is unchanged for 1.99826478s, waiting for 1m20s
Nov 13 04:32:24.023: INFO: node status heartbeat is unchanged for 2.99816755s, waiting for 1m20s
Nov 13 04:32:25.026: INFO: node status heartbeat is unchanged for 4.000882026s, waiting for 1m20s
Nov 13 04:32:26.026: INFO: node status heartbeat is unchanged for 5.001285762s, waiting for 1m20s
Nov 13 04:32:27.023: INFO: node status heartbeat is unchanged for 5.998706996s, waiting for 1m20s
Nov 13 04:32:28.026: INFO: node status heartbeat is unchanged for 7.001636568s, waiting for 1m20s
Nov 13 04:32:29.024: INFO: node status heartbeat is unchanged for 7.999050594s, waiting for 1m20s
Nov 13 04:32:30.023: INFO: node status heartbeat is unchanged for 8.998276449s, waiting for 1m20s
Nov 13 04:32:31.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:32:31.030: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:20 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:30 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:20 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:30 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:20 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:30 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:32:32.027: INFO: node status heartbeat is unchanged for 1.001975456s, waiting for 1m20s
Nov 13 04:32:33.024: INFO: node status heartbeat is unchanged for 1.998898312s, waiting for 1m20s
Nov 13 04:32:34.022: INFO: node status heartbeat is unchanged for 2.996889979s, waiting for 1m20s
Nov 13 04:32:35.025: INFO: node status heartbeat is unchanged for 3.99952678s, waiting for 1m20s
Nov 13 04:32:36.024: INFO: node status heartbeat is unchanged for 4.998969698s, waiting for 1m20s
Nov 13 04:32:37.028: INFO: node status heartbeat is unchanged for 6.002462729s, waiting for 1m20s
Nov 13 04:32:38.023: INFO: node status heartbeat is unchanged for 6.997911594s, waiting for 1m20s
Nov 13 04:32:39.025: INFO: node status heartbeat is unchanged for 7.999195695s, waiting for 1m20s
Nov 13 04:32:40.025: INFO: node status heartbeat is unchanged for 8.999364948s, waiting for 1m20s
Nov 13 04:32:41.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:32:41.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:40 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:40 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:40 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:32:42.024: INFO: node status heartbeat is unchanged for 999.055507ms, waiting for 1m20s
Nov 13 04:32:43.024: INFO: node status heartbeat is unchanged for 1.99922856s, waiting for 1m20s
Nov 13 04:32:44.024: INFO: node status heartbeat is unchanged for 2.998893248s, waiting for 1m20s
Nov 13 04:32:45.023: INFO: node status heartbeat is unchanged for 3.998207296s, waiting for 1m20s
Nov 13 04:32:46.023: INFO: node status heartbeat is unchanged for 4.998444749s, waiting for 1m20s
Nov 13 04:32:47.024: INFO: node status heartbeat is unchanged for 5.999301593s, waiting for 1m20s
Nov 13 04:32:48.026: INFO: node status heartbeat is unchanged for 7.001015129s, waiting for 1m20s
Nov 13 04:32:49.023: INFO: node status heartbeat is unchanged for 7.997940373s, waiting for 1m20s
Nov 13 04:32:50.025: INFO: node status heartbeat is unchanged for 9.000096958s, waiting for 1m20s
Nov 13 04:32:51.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:32:51.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:40 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:50 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:40 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:50 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:40 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:50 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ...
    // 5 identical fields
  }
Nov 13 04:32:52.026: INFO: node status heartbeat is unchanged for 1.001627952s, waiting for 1m20s
Nov 13 04:32:53.026: INFO: node status heartbeat is unchanged for 2.001298252s, waiting for 1m20s
Nov 13 04:32:54.023: INFO: node status heartbeat is unchanged for 2.998853619s, waiting for 1m20s
Nov 13 04:32:55.027: INFO: node status heartbeat is unchanged for 4.002313043s, waiting for 1m20s
Nov 13 04:32:56.025: INFO: node status heartbeat is unchanged for 5.000384532s, waiting for 1m20s
Nov 13 04:32:57.027: INFO: node status heartbeat is unchanged for 6.002035638s, waiting for 1m20s
Nov 13 04:32:58.024: INFO: node status heartbeat is unchanged for 6.999910988s, waiting for 1m20s
Nov 13 04:32:59.024: INFO: node status heartbeat is unchanged for 7.999060439s, waiting for 1m20s
Nov 13 04:33:00.023: INFO: node status heartbeat is unchanged for 8.999006288s, waiting for 1m20s
Nov 13 04:33:01.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:33:01.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:50 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:00 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:50 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:00 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:32:50 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:00 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Nov 13 04:33:02.024: INFO: node status heartbeat is unchanged for 1.000825725s, waiting for 1m20s
Nov 13 04:33:03.023: INFO: node status heartbeat is unchanged for 1.99931679s, waiting for 1m20s
Nov 13 04:33:04.023: INFO: node status heartbeat is unchanged for 2.999877381s, waiting for 1m20s
Nov 13 04:33:05.023: INFO: node status heartbeat is unchanged for 3.999760806s, waiting for 1m20s
Nov 13 04:33:06.025: INFO: node status heartbeat is unchanged for 5.001475218s, waiting for 1m20s
Nov 13 04:33:07.023: INFO: node status heartbeat is unchanged for 5.999897562s, waiting for 1m20s
Nov 13 04:33:08.024: INFO: node status heartbeat is unchanged for 7.001117388s, waiting for 1m20s
Nov 13 04:33:09.024: INFO: node status heartbeat is unchanged for 8.000603847s, waiting for 1m20s
Nov 13 04:33:10.022: INFO: node status heartbeat is unchanged for 8.999126051s, waiting for 1m20s
Nov 13 04:33:11.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:33:11.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:00 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:10 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:00 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:10 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:00 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:10 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
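The once-per-second "unchanged for ..., waiting for 1m20s" lines follow a simple pattern: poll the node status, remember when the heartbeat last changed, and log the elapsed quiet time until it exceeds the window. A minimal sketch of that loop under assumed names (pollHeartbeat and its fake heartbeat source are mine, not the e2e framework's code):

package main

import (
	"fmt"
	"time"
)

// pollHeartbeat calls get once per second and reports how long the observed
// heartbeat has stayed unchanged, returning once it has been quiet for the
// given window -- the pattern behind the log lines above.
func pollHeartbeat(get func() time.Time, window time.Duration) {
	last := get()
	lastChange := time.Now()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for range ticker.C {
		cur := get()
		if !cur.Equal(last) {
			fmt.Printf("node status heartbeat changed in %v\n", time.Since(lastChange).Round(time.Second))
			last, lastChange = cur, time.Now()
			continue
		}
		quiet := time.Since(lastChange)
		fmt.Printf("node status heartbeat is unchanged for %v, waiting for %v\n", quiet, window)
		if quiet >= window {
			return
		}
	}
}

func main() {
	start := time.Now()
	// Fake heartbeat source that renews every 10s, then goes quiet --
	// roughly the behavior of node1 in this run.
	get := func() time.Time {
		t := time.Since(start).Truncate(10 * time.Second)
		if t > 20*time.Second {
			t = 20 * time.Second
		}
		return start.Add(t)
	}
	pollHeartbeat(get, 30*time.Second) // the real test waits 1m20s
}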
Nov 13 04:33:12.024: INFO: node status heartbeat is unchanged for 1.001540452s, waiting for 1m20s
Nov 13 04:33:13.023: INFO: node status heartbeat is unchanged for 2.000446256s, waiting for 1m20s
Nov 13 04:33:14.023: INFO: node status heartbeat is unchanged for 3.000308774s, waiting for 1m20s
Nov 13 04:33:15.023: INFO: node status heartbeat is unchanged for 3.999979501s, waiting for 1m20s
Nov 13 04:33:16.023: INFO: node status heartbeat is unchanged for 5.00032777s, waiting for 1m20s
Nov 13 04:33:17.023: INFO: node status heartbeat is unchanged for 6.000289682s, waiting for 1m20s
Nov 13 04:33:18.024: INFO: node status heartbeat is unchanged for 7.000660464s, waiting for 1m20s
Nov 13 04:33:19.023: INFO: node status heartbeat is unchanged for 8.000091802s, waiting for 1m20s
Nov 13 04:33:20.023: INFO: node status heartbeat is unchanged for 9.000534597s, waiting for 1m20s
Nov 13 04:33:21.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:33:21.027: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:10 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:20 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:10 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:20 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:10 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:20 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Nov 13 04:33:22.023: INFO: node status heartbeat is unchanged for 1.00048587s, waiting for 1m20s
Nov 13 04:33:23.024: INFO: node status heartbeat is unchanged for 2.001176105s, waiting for 1m20s
Nov 13 04:33:24.022: INFO: node status heartbeat is unchanged for 2.999877431s, waiting for 1m20s
Nov 13 04:33:25.025: INFO: node status heartbeat is unchanged for 4.002136471s, waiting for 1m20s
Nov 13 04:33:26.024: INFO: node status heartbeat is unchanged for 5.001010095s, waiting for 1m20s
Nov 13 04:33:27.023: INFO: node status heartbeat is unchanged for 6.000449285s, waiting for 1m20s
Nov 13 04:33:28.024: INFO: node status heartbeat is unchanged for 7.001259064s, waiting for 1m20s
Nov 13 04:33:29.023: INFO: node status heartbeat is unchanged for 8.000606003s, waiting for 1m20s
Nov 13 04:33:30.023: INFO: node status heartbeat is unchanged for 9.00095508s, waiting for 1m20s
Nov 13 04:33:31.023: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:33:31.028: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:20 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:30 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:20 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:30 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:20 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:30 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Nov 13 04:33:32.024: INFO: node status heartbeat is unchanged for 1.000548543s, waiting for 1m20s
Nov 13 04:33:33.023: INFO: node status heartbeat is unchanged for 1.999594532s, waiting for 1m20s
Nov 13 04:33:34.023: INFO: node status heartbeat is unchanged for 2.999926957s, waiting for 1m20s
Nov 13 04:33:35.026: INFO: node status heartbeat is unchanged for 4.00243825s, waiting for 1m20s
Nov 13 04:33:36.025: INFO: node status heartbeat is unchanged for 5.001481477s, waiting for 1m20s
Nov 13 04:33:37.026: INFO: node status heartbeat is unchanged for 6.002849551s, waiting for 1m20s
Nov 13 04:33:38.025: INFO: node status heartbeat is unchanged for 7.001703487s, waiting for 1m20s
Nov 13 04:33:39.024: INFO: node status heartbeat is unchanged for 8.001039583s, waiting for 1m20s
Nov 13 04:33:40.026: INFO: node status heartbeat is unchanged for 9.002923815s, waiting for 1m20s
Nov 13 04:33:41.025: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Nov 13 04:33:41.029: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, LastTransitionTime: {Time: s"2021-11-12 21:11:27 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:40 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:40 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:30 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-11-13 04:33:40 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-11-12 21:07:36 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-11-12 21:08:47 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Nov 13 04:33:42.025: INFO: node status heartbeat is unchanged for 1.000289759s, waiting for 1m20s
Nov 13 04:33:43.024: INFO: node status heartbeat is unchanged for 1.999258516s, waiting for 1m20s
Nov 13 04:33:44.022: INFO: node status heartbeat is unchanged for 2.997796268s, waiting for 1m20s
Nov 13 04:33:45.024: INFO: node status heartbeat is unchanged for 3.999671262s, waiting for 1m20s
Nov 13 04:33:46.025: INFO: node status heartbeat is unchanged for 5.00020159s, waiting for 1m20s
Nov 13 04:33:47.023: INFO: node status heartbeat is unchanged for 5.998268485s, waiting for 1m20s
Nov 13 04:33:47.026: INFO: node status heartbeat is unchanged for 6.001779715s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:33:47.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-4213" for this suite.

• [SLOW TEST:300.055 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":4,"skipped":500,"failed":0}
Nov 13 04:33:47.047: INFO: Running AfterSuite actions on all nodes
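What the test above verified: with the NodeLease feature enabled, the kubelet renews a coordination.k8s.io Lease object in the kube-node-lease namespace as its cheap, frequent liveness signal, while sending full NodeStatus updates only infrequently; the node controller accepts the lease renewal, so the node stays Ready throughout. A hedged client-go sketch for inspecting that lease (it assumes a reachable cluster, and reuses the kubeconfig path and node name from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from this log; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Each kubelet renews a Lease named after its node in kube-node-lease;
	// RenewTime advancing is the liveness signal that keeps the node Ready
	// between infrequent NodeStatus reports.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.RenewTime != nil && lease.Spec.LeaseDurationSeconds != nil {
		fmt.Printf("node1 lease: renewTime=%s leaseDurationSeconds=%d\n",
			lease.Spec.RenewTime.Time, *lease.Spec.LeaseDurationSeconds)
	}
}

[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Nov 13 04:29:04.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Nov 13 04:29:04.629: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:29:06.631: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:29:08.633: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Nov 13 04:29:10.631: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Nov 13 04:40:55.096: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-11-13 04:35:44 +0000 UTC restartedAt=2021-11-13 04:40:54 +0000 UTC (5m10s)
Nov 13 04:46:06.552: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-11-13 04:40:59 +0000 UTC restartedAt=2021-11-13 04:46:05 +0000 UTC (5m6s)
Nov 13 04:51:16.052: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-11-13 04:46:10 +0000 UTC restartedAt=2021-11-13 04:51:15 +0000 UTC (5m5s)
STEP: getting restart delay after a capped delay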
Nov 13 04:56:29.507: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-11-13 04:51:20 +0000 UTC restartedAt=2021-11-13 04:56:28 +0000 UTC (5m8s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Nov 13 04:56:29.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3622" for this suite.

• [SLOW TEST:1644.923 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":3,"skipped":318,"failed":0}
Nov 13 04:56:29.521: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":291,"failed":0}
Nov 13 04:30:54.307: INFO: Running AfterSuite actions on all nodes
Nov 13 04:56:29.574: INFO: Running AfterSuite actions on node 1
Nov 13 04:56:29.574: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1724.236 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 28m45.765027855s
Test Suite Failed
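A closing note on the back-off-cap test: the getRestartDelay values recorded above (5m10s, 5m6s, 5m5s, 5m8s) are consistent with the kubelet's crash-loop back-off, which starts at 10s, doubles on each restart, and is capped at MaxContainerBackOff (5m by default); the few extra seconds are pod-restart overhead. A small sketch of that schedule (the constants mirror the kubelet defaults; the loop is illustrative, not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet defaults: back-off starts at 10s, doubles per restart, and is
	// capped at MaxContainerBackOff (5m) -- hence restartCount >= 7 above
	// always waits ~5m before the next restart.
	const (
		initialBackoff = 10 * time.Second
		maxBackoff     = 5 * time.Minute // MaxContainerBackOff
	)
	delay := initialBackoff
	for restart := 1; restart <= 10; restart++ {
		fmt.Printf("restartCount=%d -> expected delay %v\n", restart, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}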