Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1634963995 - Will randomize all specs
Will run 5770 specs

Running in parallel across 10 nodes

Oct 23 04:39:57.411: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.414: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 23 04:39:57.442: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 23 04:39:57.513: INFO: The status of Pod cmk-init-discover-node1-c599w is Succeeded, skipping waiting
Oct 23 04:39:57.513: INFO: The status of Pod cmk-init-discover-node2-2btnq is Succeeded, skipping waiting
Oct 23 04:39:57.513: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 23 04:39:57.513: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Oct 23 04:39:57.513: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 23 04:39:57.525: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
Oct 23 04:39:57.525: INFO: e2e test version: v1.21.5
Oct 23 04:39:57.526: INFO: kube-apiserver version: v1.21.1
Oct 23 04:39:57.526: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.532: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
Oct 23 04:39:57.528: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.551: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSS
------------------------------
Oct 23 04:39:57.551: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.572: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 04:39:57.551: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.572: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Oct 23 04:39:57.566: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.589: INFO: Cluster IP family: ipv4
Oct 23 04:39:57.566: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.589: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 04:39:57.569: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.591: INFO: Cluster IP family: ipv4
S
------------------------------
Oct 23 04:39:57.568: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.591: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
Oct 23 04:39:57.576: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.594: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
Oct 23 04:39:57.576: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:39:57.599: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1023 04:39:57.751193      32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.751: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.753: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull image from invalid registry [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:05.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9957" for this suite.

• [SLOW TEST:8.087 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":27,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
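What this spec drives, as a minimal client-go sketch (not the suite's own code; the registry host, namespace, and image tag below are illustrative): create a pod that references an unreachable registry and let the kubelet surface the pull failure. The e2e test then asserts the container never starts, with a waiting reason such as ErrImagePull or ImagePullBackOff in the pod's container statuses.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "image-pull-test",
				Image: "invalid.example.com/no-such/image:1.0", // unreachable registry (made up)
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Polling created.Status.ContainerStatuses[0].State.Waiting.Reason would
	// show ErrImagePull / ImagePullBackOff; the pod stays Pending.
	fmt.Println("created pod", created.Name)
}
------------------------------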
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1023 04:39:57.901389      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.901: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.903: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
Oct 23 04:39:57.916: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e" in namespace "security-context-test-1579" to be "Succeeded or Failed"
Oct 23 04:39:57.918: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.731442ms
Oct 23 04:39:59.922: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006222307s
Oct 23 04:40:01.927: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01046679s
Oct 23 04:40:03.933: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016471705s
Oct 23 04:40:05.936: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.01974529s
Oct 23 04:40:07.940: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024372843s
Oct 23 04:40:09.945: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.028950753s
Oct 23 04:40:09.945: INFO: Pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e" satisfied condition "Succeeded or Failed"
Oct 23 04:40:10.324: INFO: Got logs for pod "busybox-privileged-true-735ddac6-46ac-406c-8eb8-0e04f04b752e": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:10.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1579" for this suite.

• [SLOW TEST:12.452 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":82,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
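The pod under test looks roughly like this sketch (the busybox tag is assumed; the fields are the real core/v1 API). privileged: true is what lets a root-only network operation succeed inside the container.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	privileged := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-privileged-true",
				Image: "busybox:1.29",
				// Adding a dummy link needs CAP_NET_ADMIN; it only succeeds
				// because the container runs privileged.
				Command:         []string{"sh", "-c", "ip link add dummy1 type dummy && ip link del dummy1"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	fmt.Printf("%s privileged=%v\n", pod.Name, *pod.Spec.Containers[0].SecurityContext.Privileged)
}
------------------------------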
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
W1023 04:39:57.906398      35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.906: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.908: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
Oct 23 04:39:57.923: INFO: Waiting up to 5m0s for pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c" in namespace "downward-api-8739" to be "Succeeded or Failed"
Oct 23 04:39:57.925: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224423ms
Oct 23 04:39:59.930: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006536688s
Oct 23 04:40:01.934: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010273003s
Oct 23 04:40:03.938: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014679277s
Oct 23 04:40:05.942: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018744789s
Oct 23 04:40:07.945: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.022086369s
Oct 23 04:40:09.949: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.025625704s
STEP: Saw pod success
Oct 23 04:40:09.949: INFO: Pod "downward-api-06697d61-2031-4d3a-bbca-429d1997716c" satisfied condition "Succeeded or Failed"
Oct 23 04:40:09.951: INFO: Trying to get logs from node node1 pod downward-api-06697d61-2031-4d3a-bbca-429d1997716c container dapi-container:
STEP: delete the pod
Oct 23 04:40:10.531: INFO: Waiting for pod downward-api-06697d61-2031-4d3a-bbca-429d1997716c to disappear
Oct 23 04:40:10.533: INFO: Pod downward-api-06697d61-2031-4d3a-bbca-429d1997716c no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:10.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8739" for this suite.

• [SLOW TEST:12.659 seconds]
[sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":83,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
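The downward API wiring this test exercises, as a sketch (assumed busybox tag): fieldRef env vars pull status.hostIP and status.podIP into the container, and with hostNetwork: true the test expects the two values to match.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-hostnetwork"},
		Spec: corev1.PodSpec{
			HostNetwork:   true, // pod shares the node's network namespace
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "HOST_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------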
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1023 04:39:57.851891      28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.852: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.853: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 23 04:39:57.867: INFO: Waiting up to 5m0s for pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66" in namespace "security-context-1609" to be "Succeeded or Failed"
Oct 23 04:39:57.869: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.311819ms
Oct 23 04:39:59.873: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006359263s
Oct 23 04:40:01.876: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009636062s
Oct 23 04:40:03.881: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01387407s
Oct 23 04:40:05.883: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016792348s
Oct 23 04:40:07.887: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020591111s
Oct 23 04:40:09.892: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025805893s
Oct 23 04:40:11.898: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031768044s
Oct 23 04:40:13.904: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.036998136s
STEP: Saw pod success
Oct 23 04:40:13.904: INFO: Pod "security-context-30176157-12ca-4546-b925-8d90cda6ea66" satisfied condition "Succeeded or Failed"
Oct 23 04:40:13.907: INFO: Trying to get logs from node node2 pod security-context-30176157-12ca-4546-b925-8d90cda6ea66 container test-container:
STEP: delete the pod
Oct 23 04:40:13.925: INFO: Waiting for pod security-context-30176157-12ca-4546-b925-8d90cda6ea66 to disappear
Oct 23 04:40:13.927: INFO: Pod security-context-30176157-12ca-4546-b925-8d90cda6ea66 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:13.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-1609" for this suite.

• [SLOW TEST:16.105 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":1,"skipped":57,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
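A sketch of the seccomp setup the log mentions. The test uses the legacy seccomp.security.alpha.kubernetes.io/pod annotation (as logged); the structured securityContext.seccompProfile field is the current equivalent and is shown alongside it. Image tag and command are assumptions; "Seccomp: 2" in /proc/1/status means the runtime's default filter is applied.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "security-context-seccomp-default",
			// Deprecated annotation form, as used by this e2e test.
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "runtime/default",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Field-based equivalent of the annotation above.
			SecurityContext: &corev1.PodSecurityContext{
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "grep Seccomp /proc/1/status"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------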
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:14.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
Oct 23 04:40:14.070: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:14.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-7988" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
W1023 04:39:57.663116      38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.663: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.667: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
Oct 23 04:39:57.684: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:39:59.688: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:01.690: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:03.692: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:05.690: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:07.688: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:09.689: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:11.688: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:13.689: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
Oct 23 04:40:13.692: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5374 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 04:40:13.692: INFO: >>> kubeConfig: /root/.kube/config
Oct 23 04:40:14.335: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-5374 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 04:40:14.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Executing in the non-privileged container
Oct 23 04:40:14.622: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-5374 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Oct 23 04:40:14.622: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:15.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-5374" for this suite.

• [SLOW TEST:17.683 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
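A sketch of what the framework's ExecWithOptions does under the hood: stream an exec into a named container with client-go's SPDY executor. The pod, namespace, container, and command come from the log above; error handling is minimal, and this is not the suite's own helper code.

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-privileged-pod-5374").
		Name("privileged-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "privileged-container",
			Command:   []string{"ip", "link", "add", "dummy1", "type", "dummy"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Succeeds in the privileged container; the same command against
	// not-privileged-container fails with "operation not permitted",
	// which is exactly the asymmetry the test asserts.
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	fmt.Println("err:", err, "stderr:", stderr.String())
}
------------------------------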
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:15.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
Oct 23 04:40:15.433: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:15.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-3666" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.029 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W1023 04:39:57.796907      24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.797: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.798: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Oct 23 04:39:57.812: INFO: Waiting up to 5m0s for pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531" in namespace "security-context-test-5547" to be "Succeeded or Failed"
Oct 23 04:39:57.815: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.73004ms
Oct 23 04:39:59.818: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006096822s
Oct 23 04:40:01.822: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009909212s
Oct 23 04:40:03.828: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015759548s
Oct 23 04:40:05.832: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019638034s
Oct 23 04:40:07.835: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023407295s
Oct 23 04:40:09.840: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 12.027774309s
Oct 23 04:40:11.843: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 14.031206904s
Oct 23 04:40:13.848: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036232482s
Oct 23 04:40:15.852: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 18.040045541s
Oct 23 04:40:17.856: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Pending", Reason="", readiness=false. Elapsed: 20.043797455s
Oct 23 04:40:19.860: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.048286087s
Oct 23 04:40:19.860: INFO: Pod "busybox-user-0-e1155cab-52f3-4d94-acd9-ca0dbf569531" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:19.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5547" for this suite.

• [SLOW TEST:22.094 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":62,"failed":0}
SSSSSSSSS
------------------------------
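The runAsUser mechanics this spec covers, sketched (image tag assumed): the kubelet starts the container's process as the requested UID, so `id -u` prints 0 here.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(0)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-0"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox-user-0",
				Image:           "busybox:1.29",
				Command:         []string{"sh", "-c", "id -u"}, // prints 0
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Println(pod.Name, "runAsUser:", *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}
------------------------------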
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:57.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W1023 04:39:57.966016      31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:57.966: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:57.967: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Oct 23 04:39:57.982: INFO: Waiting up to 5m0s for pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458" in namespace "security-context-2702" to be "Succeeded or Failed"
Oct 23 04:39:57.984: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 1.844473ms
Oct 23 04:39:59.988: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005221694s
Oct 23 04:40:01.992: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009505805s
Oct 23 04:40:03.996: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013504936s
Oct 23 04:40:05.999: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 8.016973056s
Oct 23 04:40:08.002: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 10.020006139s
Oct 23 04:40:10.006: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024004326s
Oct 23 04:40:12.012: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 14.029626357s
Oct 23 04:40:14.019: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 16.036163773s
Oct 23 04:40:16.022: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 18.039597018s
Oct 23 04:40:18.026: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 20.044081987s
Oct 23 04:40:20.030: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Pending", Reason="", readiness=false. Elapsed: 22.048041097s
Oct 23 04:40:22.036: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053350774s
STEP: Saw pod success
Oct 23 04:40:22.036: INFO: Pod "security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458" satisfied condition "Succeeded or Failed"
Oct 23 04:40:22.038: INFO: Trying to get logs from node node2 pod security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458 container test-container:
STEP: delete the pod
Oct 23 04:40:22.062: INFO: Waiting for pod security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458 to disappear
Oct 23 04:40:22.065: INFO: Pod security-context-70f0c9fc-5229-40c6-b593-2eb446fbe458 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:22.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-2702" for this suite.

• [SLOW TEST:24.128 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp default which is unconfined [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
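Relative to the runtime/default sketch earlier, only the profile value changes for this spec. "unconfined" requests no seccomp filtering, so the test expects "Seccomp: 0" in /proc/1/status rather than "Seccomp: 2". A minimal fragment of the field-based form:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Pod-level security context asking for no seccomp filter at all.
	sc := &corev1.PodSecurityContext{
		SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
	}
	fmt.Println(sc.SeccompProfile.Type) // "Unconfined"
}
------------------------------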
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:39:58.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
W1023 04:39:58.200201      33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Oct 23 04:39:58.200: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
Oct 23 04:39:58.202: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:22.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9247" for this suite.

• [SLOW TEST:24.160 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":1,"skipped":220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
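The "create image pull secret" step boils down to a kubernetes.io/dockerconfigjson secret referenced via imagePullSecrets, sketched below. The registry host, image, and auth payload are placeholders, not the suite's real credentials. The companion spec that follows runs the same pod without the secret and expects the pull to fail.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "image-pull-secret"},
		Type:       corev1.SecretTypeDockerConfigJson,
		Data: map[string][]byte{
			// Placeholder credentials in .dockerconfigjson format.
			corev1.DockerConfigJsonKey: []byte(`{"auths":{"registry.example.com":{"auth":"<base64 user:pass>"}}}`),
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-test"},
		Spec: corev1.PodSpec{
			RestartPolicy:    corev1.RestartPolicyNever,
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: secret.Name}},
			Containers: []corev1.Container{{
				Name:  "private-image-test",
				Image: "registry.example.com/private/image:1.0", // assumed private image
			}},
		},
	}
	fmt.Println(secret.Name, pod.Name)
}
------------------------------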
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:10.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:22.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4195" for this suite.

• [SLOW TEST:12.104 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":2,"skipped":198,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:22.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Oct 23 04:40:22.689: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:22.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-8367" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:06.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Oct 23 04:40:06.037: INFO: Waiting up to 5m0s for pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3" in namespace "security-context-4524" to be "Succeeded or Failed"
Oct 23 04:40:06.039: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688979ms
Oct 23 04:40:08.043: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006176688s
Oct 23 04:40:10.049: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012445657s
Oct 23 04:40:12.053: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016052251s
Oct 23 04:40:14.057: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019811148s
Oct 23 04:40:16.059: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02210035s
Oct 23 04:40:18.062: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025174671s
Oct 23 04:40:20.070: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032750773s
Oct 23 04:40:22.074: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.037457739s
Oct 23 04:40:24.080: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.043011986s
STEP: Saw pod success
Oct 23 04:40:24.080: INFO: Pod "security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3" satisfied condition "Succeeded or Failed"
Oct 23 04:40:24.083: INFO: Trying to get logs from node node2 pod security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3 container test-container:
STEP: delete the pod
Oct 23 04:40:24.096: INFO: Waiting for pod security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3 to disappear
Oct 23 04:40:24.098: INFO: Pod security-context-b8ae91ea-a955-481c-a502-dfb2810d80d3 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:24.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4524" for this suite.

• [SLOW TEST:18.105 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":2,"skipped":118,"failed":0}
SSSSSSSS
------------------------------
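pod.Spec.SecurityContext.SupplementalGroups in sketch form (group IDs and image tag are illustrative): the listed GIDs are added to the container process's group set, which `id -G` then reports.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-supplemental-groups"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Extra group IDs joined by the container's first process.
				SupplementalGroups: []int64{1234, 5678},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "id -G"}, // output should include 1234 and 5678
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------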
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:14.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an explicit non-root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
Oct 23 04:40:14.274: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2647" to be "Succeeded or Failed"
Oct 23 04:40:14.276: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.483221ms
Oct 23 04:40:16.279: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005812035s
Oct 23 04:40:18.285: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011417769s
Oct 23 04:40:20.289: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015706844s
Oct 23 04:40:22.293: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019787358s
Oct 23 04:40:24.298: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.024795816s
Oct 23 04:40:26.304: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.030805424s
Oct 23 04:40:26.304: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:26.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2647" for this suite.

• [SLOW TEST:12.093 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
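A sketch of the runAsNonRoot case (the UID 1234 and image tag are assumptions, not copied from the test): an explicit non-zero runAsUser satisfies the runAsNonRoot check and the container starts normally.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1234) // any non-zero UID satisfies runAsNonRoot
	nonRoot := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "explicit-nonroot-uid",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:    &uid,
					RunAsNonRoot: &nonRoot,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------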
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:26.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Oct 23 04:40:26.395: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:26.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-7308" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:16.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Oct 23 04:40:16.331: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7" in namespace "security-context-test-7735" to be "Succeeded or Failed"
Oct 23 04:40:16.335: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083912ms
Oct 23 04:40:18.339: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007479977s
Oct 23 04:40:20.343: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011522411s
Oct 23 04:40:22.346: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015205045s
Oct 23 04:40:24.352: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020565309s
Oct 23 04:40:26.355: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023647574s
Oct 23 04:40:28.359: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.027471913s
Oct 23 04:40:28.359: INFO: Pod "alpine-nnp-true-8a96c6ea-1cfe-4123-a5ac-4f20db5542c7" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:28.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7735" for this suite.

• [SLOW TEST:12.216 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":507,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
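The escalation knob under test, sketched (the alpine tag is assumed; the e2e suite uses its own test image): allowPrivilegeEscalation: true leaves the kernel's no_new_privs flag unset, so a setuid binary run by a non-root user can still gain privileges, while false would block that.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000)
	allow := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-true",
				Image: "alpine:3.14", // placeholder for the suite's nonewprivs image
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                &uid,
					AllowPrivilegeEscalation: &allow,
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
------------------------------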
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:28.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:40:28.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-5248" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":3,"skipped":529,"failed":0}
SSS
------------------------------
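What the NodeLease spec checks, as a client-go sketch (the node name "node1" and kubeconfig path are illustrative): each kubelet maintains a coordination.k8s.io/v1 Lease named after its node in the kube-node-lease namespace, and the test expects that Lease's OwnerReferences to point at the Node object.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One Lease per node, named after the node, renewed as its heartbeat.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.TODO(), "node1", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range lease.OwnerReferences {
		fmt.Printf("owner: kind=%s name=%s uid=%s\n", ref.Kind, ref.Name, ref.UID)
	}
}
------------------------------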
• [SLOW TEST:7.004 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":3,"skipped":200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:22.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Oct 23 04:40:22.453: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5403" to be "Succeeded or Failed" Oct 23 04:40:22.456: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313119ms Oct 23 04:40:24.459: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00609582s Oct 23 04:40:26.463: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00958944s Oct 23 04:40:28.467: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013998584s Oct 23 04:40:30.471: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018038955s Oct 23 04:40:30.471: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:30.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5403" for this suite. 
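------------------------------
The distinguishing detail of the "implicit-nonroot-uid" pod above is that runAsNonRoot is set without any runAsUser: the kubelet must take the UID from the USER directive baked into the image and verify it is non-zero before starting the container. A sketch under that assumption (the image tag is assumed, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: implicit-nonroot-uid
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/nonroot:1.1   # an image whose Dockerfile declares a non-root USER
    securityContext:
      runAsNonRoot: true
      # no runAsUser: the effective UID comes from the image metadata
------------------------------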
• [SLOW TEST:8.070 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:28.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:34.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2611" for this suite. 
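------------------------------
This spec is the inverse case: an explicit runAsUser: 0 contradicts runAsNonRoot: true, so the kubelet refuses to start the container. The pod is not rejected at admission; the container simply never leaves the waiting state (CreateContainerConfigError), which is why the spec only waits a few seconds and then asserts on status. A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: explicit-root-uid
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # the image itself is irrelevant here
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 0       # root UID conflicts with runAsNonRoot, so the container never starts
------------------------------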
• [SLOW TEST:6.047 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:26.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-ec01886c-aacf-49fa-84d7-1ef7c400049d in namespace container-probe-8863 Oct 23 04:40:34.857: INFO: Started pod startup-override-ec01886c-aacf-49fa-84d7-1ef7c400049d in namespace container-probe-8863 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:40:34.859: INFO: Initial restart count of pod startup-override-ec01886c-aacf-49fa-84d7-1ef7c400049d is 0 Oct 23 04:40:40.879: INFO: Restart count of pod container-probe-8863/startup-override-ec01886c-aacf-49fa-84d7-1ef7c400049d is now 1 (6.019811408s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:40.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8863" for this suite. 
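------------------------------
The probe-level terminationGracePeriodSeconds exercised above (a v1.21 addition behind the ProbeTerminationGracePeriod feature gate, as the [Feature:...] tag indicates) lets a failing startup probe kill the container on a much shorter grace period than the pod-wide value, which is why the restart is observed after only ~6s. A rough sketch; all field values here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: startup-override
spec:
  terminationGracePeriodSeconds: 600   # deliberately long pod-wide grace period
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "sleep 1800"]
    startupProbe:
      exec:
        command: ["/bin/false"]          # never succeeds, so the kubelet kills and restarts the container
      failureThreshold: 1
      periodSeconds: 5
      terminationGracePeriodSeconds: 5   # probe-level override wins over the 600s above
------------------------------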
• [SLOW TEST:14.083 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":3,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:41.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3381" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":4,"skipped":503,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:41.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-1805/configmap-test-10c29c2f-8f8a-4434-8bb0-98c4fd560ade STEP: Updating configMap configmap-1805/configmap-test-10c29c2f-8f8a-4434-8bb0-98c4fd560ade STEP: Verifying update of ConfigMap configmap-1805/configmap-test-10c29c2f-8f8a-4434-8bb0-98c4fd560ade [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:41.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1805" for this suite. 
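------------------------------
The ConfigMap spec above is a pure API round-trip: create the object, mutate .data through an Update call, then GET it back and compare. A create-then-update against a manifest like the following (name and keys illustrative) exercises the same path:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data: value-0      # the spec then updates this value in place and verifies the read-back
------------------------------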
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":5,"skipped":524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:22.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Oct 23 04:40:33.801: INFO: start=2021-10-23 04:40:28.762864256 +0000 UTC m=+32.970471515, now=2021-10-23 04:40:33.801461953 +0000 UTC m=+38.009069252, kubelet pod: {"metadata":{"name":"pod-submit-remove-368b92d6-04be-4369-a85f-a68221fbe99e","namespace":"pods-1008","uid":"752ad7ed-53b7-4a82-bdc3-43b43f3b8d68","resourceVersion":"162883","creationTimestamp":"2021-10-23T04:40:22Z","deletionTimestamp":"2021-10-23T04:40:58Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"738903355"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.104\"\n ],\n \"mac\": \"56:06:96:40:dd:c0\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.104\"\n ],\n \"mac\": \"56:06:96:40:dd:c0\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2021-10-23T04:40:22.752444479Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-23T04:40:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-hjmtp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-hjmtp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:22Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:22Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.104","podIPs":[{"ip":"10.244.4.104"}],"startTime":"2021-10-23T04:40:22Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-10-23T04:40:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://1bca7c4aca34cf7753031e3607367e234eeea7fe394b393b99acbfd392317c5a","started":true}],"qosClass":"BestEffort"}} Oct 23 04:40:38.782: INFO: start=2021-10-23 04:40:28.762864256 +0000 UTC m=+32.970471515, now=2021-10-23 04:40:38.782374452 +0000 UTC m=+42.989981827, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-368b92d6-04be-4369-a85f-a68221fbe99e","namespace":"pods-1008","uid":"752ad7ed-53b7-4a82-bdc3-43b43f3b8d68","resourceVersion":"162883","creationTimestamp":"2021-10-23T04:40:22Z","deletionTimestamp":"2021-10-23T04:40:58Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"738903355"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.104\"\n ],\n \"mac\": \"56:06:96:40:dd:c0\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.104\"\n ],\n \"mac\": \"56:06:96:40:dd:c0\",\n \"default\": true,\n \"dns\": {}\n}]","kubernetes.io/config.seen":"2021-10-23T04:40:22.752444479Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-10-23T04:40:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-hjmtp","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-hjmtp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:22Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:27Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:27Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-10-23T04:40:22Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.104","podIPs":[{"ip":"10.244.4.104"}],"startTime":"2021-10-23T04:40:22Z","containerStatuses":[{"name":"agnhost-container","state":{"running":{"startedAt":"2021-10-23T04:40:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"d
ocker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://1bca7c4aca34cf7753031e3607367e234eeea7fe394b393b99acbfd392317c5a","started":true}],"qosClass":"BestEffort"}} Oct 23 04:40:43.776: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:43.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1008" for this suite. • [SLOW TEST:21.071 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":2,"skipped":402,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:29.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 23 04:40:29.901: INFO: Waiting up to 5m0s for pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300" in namespace "security-context-9629" to be "Succeeded or Failed" Oct 23 04:40:29.905: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 3.905075ms Oct 23 04:40:31.909: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007852041s Oct 23 04:40:33.913: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011497871s Oct 23 04:40:35.916: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014938153s Oct 23 04:40:37.921: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019806178s Oct 23 04:40:39.927: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025614431s Oct 23 04:40:41.930: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 12.029118683s Oct 23 04:40:43.935: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Pending", Reason="", readiness=false. Elapsed: 14.033716924s Oct 23 04:40:45.938: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.03687959s STEP: Saw pod success Oct 23 04:40:45.938: INFO: Pod "security-context-9fbf68da-94dc-4329-b8ce-a9281a712300" satisfied condition "Succeeded or Failed" Oct 23 04:40:45.940: INFO: Trying to get logs from node node2 pod security-context-9fbf68da-94dc-4329-b8ce-a9281a712300 container test-container: STEP: delete the pod Oct 23 04:40:45.951: INFO: Waiting for pod security-context-9fbf68da-94dc-4329-b8ce-a9281a712300 to disappear Oct 23 04:40:45.953: INFO: Pod security-context-9fbf68da-94dc-4329-b8ce-a9281a712300 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:45.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9629" for this suite. • [SLOW TEST:16.093 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:39:57.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W1023 04:39:57.917399 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 04:39:57.917: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 04:39:57.919: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-22b79caa-ff85-4f08-b159-544aced86437 in namespace kubelet-6832 I1023 04:39:57.953180 29 runners.go:190] Created replication controller with name: cleanup20-22b79caa-ff85-4f08-b159-544aced86437, namespace: kubelet-6832, replica count: 20 I1023 04:40:08.005492 29 runners.go:190] cleanup20-22b79caa-ff85-4f08-b159-544aced86437 Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 04:40:18.007155 29 runners.go:190] cleanup20-22b79caa-ff85-4f08-b159-544aced86437 Pods: 20 out of 20 created, 12 running, 8 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1023 04:40:28.009049 29 runners.go:190] cleanup20-22b79caa-ff85-4f08-b159-544aced86437 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 23 04:40:29.009: INFO: Checking pods on node node2 via /runningpods endpoint Oct 23 04:40:29.009: INFO: Checking pods on node node1 via /runningpods endpoint Oct 23 04:40:29.045: INFO:
Resource usage on node "master1":
container    cpu(cores)    memory_working_set(MB)    memory_rss(MB)
"/"          0.340         4917.87                   1610.77
"runtime"    0.113         660.61                    280.80
"kubelet"    0.113         660.61                    280.80

Resource usage on node "master2":
container    cpu(cores)    memory_working_set(MB)    memory_rss(MB)
"kubelet"    0.108         611.91                    263.43
"/"          0.579         4084.57                   1699.18
"runtime"    0.108         611.91                    263.43

Resource usage on node "master3":
container    cpu(cores)    memory_working_set(MB)    memory_rss(MB)
"/"          0.413         3747.08                   1566.60
"runtime"    0.117         559.53                    251.25
"kubelet"    0.117         559.53                    251.25

Resource usage on node "node1":
container    cpu(cores)    memory_working_set(MB)    memory_rss(MB)
"/"          1.864         6852.12                   2619.55
"runtime"    0.664         2698.79                   601.32
"kubelet"    0.664         2698.79                   601.32

Resource usage on node "node2":
container    cpu(cores)    memory_working_set(MB)    memory_rss(MB)
"/"          1.835         4234.64                   1149.30
"runtime"    0.975         1630.97                   555.75
"kubelet"    0.975         1630.97                   555.75

STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-22b79caa-ff85-4f08-b159-544aced86437 in namespace kubelet-6832, will wait for the garbage collector to delete the pods Oct 23 04:40:29.102: INFO: Deleting ReplicationController cleanup20-22b79caa-ff85-4f08-b159-544aced86437 took: 4.364727ms Oct 23 04:40:29.703: INFO: Terminating ReplicationController cleanup20-22b79caa-ff85-4f08-b159-544aced86437 pods took: 601.006145ms Oct 23 04:40:46.305: INFO: Checking pods on node node1 via /runningpods endpoint Oct 23 04:40:46.305: INFO: Checking pods on node node2 via /runningpods endpoint Oct 23 04:40:46.323: INFO: Deleting 20 pods on 2 nodes completed in 1.018435526s after the RC was deleted Oct 23 04:40:46.323: INFO:
CPU usage of containers on node "master3":
container    5th%     20th%    50th%    70th%    90th%    95th%    99th%
"/"          0.000    0.360    0.408    0.411    0.413    0.413    0.413
"runtime"    0.000    0.000    0.110    0.114    0.114    0.114    0.114
"kubelet"    0.000    0.000    0.110    0.114    0.114    0.114    0.114

CPU usage of containers on node "node1":
container    5th%     20th%    50th%    70th%    90th%    95th%    99th%
"/"          0.000    0.000    1.864    1.864    2.019    2.019    2.019
"runtime"    0.000    0.000    0.592    0.664    0.664    0.664    0.664
"kubelet"    0.000    0.000    0.592    0.664    0.664    0.664    0.664

CPU usage of containers on node "node2":
container    5th%     20th%    50th%    70th%    90th%    95th%    99th%
"/"          0.000    0.000    0.820    1.835    1.835    1.835    1.835
"runtime"    0.000    0.000    0.931    0.931    0.931    0.931    0.931
"kubelet"    0.000    0.000    0.931    0.931    0.931    0.931    0.931

CPU usage of containers on node "master1":
container    5th%     20th%    50th%    70th%    90th%    95th%    99th%
"/"          0.000    0.000    0.408    0.408    0.419    0.419    0.419
"runtime"    0.000    0.000    0.113    0.113    0.114    0.114    0.114
"kubelet"    0.000    0.000    0.113    0.113    0.114    0.114    0.114

CPU usage of containers on node "master2":
container    5th%     20th%    50th%    70th%    90th%    95th%    99th%
"/"          0.000    0.428    0.533    0.544    0.579    0.579    0.579
"runtime"    0.000    0.000    0.099    0.108    0.108    0.108    0.108
"kubelet"    0.000    0.000    0.099    0.108    0.108    0.108    0.108

[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:46.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-6832" for this suite. • [SLOW TEST:48.459 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":82,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:34.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 04:40:34.868: INFO: Waiting up to 5m0s for pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf" in namespace "security-context-3464" to be "Succeeded or Failed" Oct 23 04:40:34.870: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.950951ms Oct 23 04:40:36.873: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.004673775s Oct 23 04:40:38.876: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008248425s Oct 23 04:40:40.880: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011323679s Oct 23 04:40:42.884: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.015351881s Oct 23 04:40:44.886: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018047739s Oct 23 04:40:46.890: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.021362643s STEP: Saw pod success Oct 23 04:40:46.890: INFO: Pod "security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf" satisfied condition "Succeeded or Failed" Oct 23 04:40:46.892: INFO: Trying to get logs from node node2 pod security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf container test-container: STEP: delete the pod Oct 23 04:40:46.913: INFO: Waiting for pod security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf to disappear Oct 23 04:40:46.916: INFO: Pod security-context-5355285a-fd82-4399-8f03-8d8a85eca9bf no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:46.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3464" for this suite. • [SLOW TEST:12.086 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":5,"skipped":619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:19.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-34434d07-bfa8-4f2d-bde2-e7ade955f60f in namespace container-probe-1470 Oct 23 04:40:29.934: INFO: Started pod liveness-override-34434d07-bfa8-4f2d-bde2-e7ade955f60f in namespace container-probe-1470 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:40:29.936: INFO: Initial restart count of pod liveness-override-34434d07-bfa8-4f2d-bde2-e7ade955f60f is 1 
Oct 23 04:40:47.974: INFO: Restart count of pod container-probe-1470/liveness-override-34434d07-bfa8-4f2d-bde2-e7ade955f60f is now 2 (18.037433569s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:47.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1470" for this suite. • [SLOW TEST:28.100 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":2,"skipped":71,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:43.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Oct 23 04:40:43.900: INFO: Waiting up to 5m0s for pod "pod-always-succeed328d0df5-7c9b-48a7-a3cc-f812b69074ed" in namespace "pods-1216" to be "Succeeded or Failed" Oct 23 04:40:43.902: INFO: Pod "pod-always-succeed328d0df5-7c9b-48a7-a3cc-f812b69074ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.346682ms Oct 23 04:40:45.906: INFO: Pod "pod-always-succeed328d0df5-7c9b-48a7-a3cc-f812b69074ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005742087s Oct 23 04:40:47.910: INFO: Pod "pod-always-succeed328d0df5-7c9b-48a7-a3cc-f812b69074ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009758881s Oct 23 04:40:49.916: INFO: Pod "pod-always-succeed328d0df5-7c9b-48a7-a3cc-f812b69074ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015712658s STEP: Saw pod success Oct 23 04:40:49.916: INFO: Pod "pod-always-succeed328d0df5-7c9b-48a7-a3cc-f812b69074ed" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:51.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1216" for this suite. 
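------------------------------
The "no extra sandbox" spec above reduces to: run a pod whose containers all exit 0, wait for phase Succeeded, then read the pod's events and assert the kubelet never rebuilt the sandbox or pulled and created anything further after completion. A sketch of such a pod — the restartPolicy is assumed here; the essential property is only that the pod can run to completion:

apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed
spec:
  restartPolicy: Never     # assumed; the pod must be allowed to finish rather than restart
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "exit 0"]    # container completes immediately with status 0

After the pod reports Succeeded, its event stream should show a single sandbox and container creation and nothing further.
------------------------------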
• [SLOW TEST:8.073 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":438,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:51.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:51.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-6980" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":4,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:46.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:52.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8402" for this suite. 
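------------------------------
Here runAsNonRoot: true is set but no UID is supplied anywhere: the spec names no runAsUser and the image declares no USER, so it would run as root by default. The kubelet therefore cannot prove the container is non-root and holds it in CreateContainerConfigError, which is what the spec asserts. A sketch:

apiVersion: v1
kind: Pod
metadata:
  name: implicit-root-uid
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # busybox sets no USER, so it defaults to root
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true   # non-root is demanded, but there is no UID to verify it against
------------------------------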
• [SLOW TEST:6.042 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":5,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:47.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Oct 23 04:40:47.215: INFO: Waiting up to 5m0s for pod "security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1" in namespace "security-context-4379" to be "Succeeded or Failed" Oct 23 04:40:47.218: INFO: Pod "security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347886ms Oct 23 04:40:49.221: INFO: Pod "security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005280405s Oct 23 04:40:51.224: INFO: Pod "security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008825867s Oct 23 04:40:53.227: INFO: Pod "security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012027851s STEP: Saw pod success Oct 23 04:40:53.227: INFO: Pod "security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1" satisfied condition "Succeeded or Failed" Oct 23 04:40:53.230: INFO: Trying to get logs from node node2 pod security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1 container test-container: STEP: delete the pod Oct 23 04:40:53.252: INFO: Waiting for pod security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1 to disappear Oct 23 04:40:53.254: INFO: Pod security-context-4f9a7ca1-d5c9-4799-ad15-15ab1f411fe1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:53.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4379" for this suite. 
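------------------------------
container.SecurityContext.RunAsUser is the container-level counterpart of the pod-level field tested a few specs earlier; when both are set, the container-level value wins. A sketch with illustrative UIDs and command:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-runasuser
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # pod-level default for all containers
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "id -u"]   # logs the effective UID; 1002 is expected here
    securityContext:
      runAsUser: 1002      # container-level value overrides the pod-level 1001
------------------------------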
• [SLOW TEST:6.083 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":6,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:24.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Oct 23 04:40:54.203: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:54.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-5151" for this suite. • [SLOW TEST:30.084 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":3,"skipped":126,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:46.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Oct 23 04:40:46.697: INFO: Waiting up to 5m0s for pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42" in namespace "security-context-4754" to be "Succeeded or Failed" Oct 23 04:40:46.699: INFO: Pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.780693ms Oct 23 04:40:48.703: INFO: Pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006613677s Oct 23 04:40:50.711: INFO: Pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014086107s Oct 23 04:40:52.715: INFO: Pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018089998s Oct 23 04:40:54.719: INFO: Pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.022613452s STEP: Saw pod success Oct 23 04:40:54.719: INFO: Pod "security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42" satisfied condition "Succeeded or Failed" Oct 23 04:40:54.721: INFO: Trying to get logs from node node2 pod security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42 container test-container: STEP: delete the pod Oct 23 04:40:54.791: INFO: Waiting for pod security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42 to disappear Oct 23 04:40:54.793: INFO: Pod security-context-9bd3769b-a1ec-46d3-8be7-ca68a3dbdc42 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:54.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4754" for this suite. • [SLOW TEST:8.138 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":2,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:52.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 E1023 04:40:56.881061 27 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 220 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x653b640, 0x9beb6a0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86
panic(0x653b640, 0x9beb6a0)
	/usr/local/go/src/runtime/panic.go:965 +0x1b9
k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc000baef0c, 0x2, 0x2)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0046a0f80, 0xc000baef00, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0009bde00, 0xc0046a0f80, 0xc00454d320, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0009bde00, 0xc0046a0f80, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0009bde00, 0xc0046a0f80, 0xc0009bde00, 0xc0046a0f80)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0046a0f80, 0x14, 0xc0047026f0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc0045438c0, 0xc0024b3368, 0x14, 0xc0047026f0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92
k8s.io/kubernetes/test/e2e/common/node.glob..func2.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0011453e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0011453e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc001174280, 0x768f9a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00389e690, 0x0, 0x768f9a0, 0xc000164840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00389e690, 0x768f9a0, 0xc000164840)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003d24000, 0xc00389e690, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003d24000, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003d24000, 0xc003d1c030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000170280, 0x7f45d17848a8, 0xc003083680, 0x6f05d9d, 0x14, 0xc0037c9500, 0x3, 0x3, 0x7745ab8, 0xc000164840, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x7694a60, 0xc003083680, 0x6f05d9d, 0x14, 0xc002ee20c0, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x218
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x7694a60, 0xc003083680, 0x6f05d9d, 0x14, 0xc003380d80, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003083680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003083680)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003083680, 0x70e7b58)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-7045". STEP: Found 4 events.
Oct 23 04:40:56.885: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38: { } Scheduled: Successfully assigned container-probe-7045/startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38 to node2
Oct 23 04:40:56.885: INFO: At 2021-10-23 04:40:55 +0000 UTC - event for startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/busybox:1.29-1"
Oct 23 04:40:56.885: INFO: At 2021-10-23 04:40:56 +0000 UTC - event for startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" in 488.603534ms
Oct 23 04:40:56.885: INFO: At 2021-10-23 04:40:56 +0000 UTC - event for startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38: {kubelet node2} Created: Created container busybox
Oct 23 04:40:56.888: INFO: POD                                           NODE   PHASE    GRACE  CONDITIONS
Oct 23 04:40:56.888: INFO: startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38  node2  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 04:40:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 04:40:52 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-10-23 04:40:52 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-10-23 04:40:52 +0000 UTC  }]
Oct 23 04:40:56.888: INFO:
Oct 23 04:40:56.893: INFO: Logging node info for node master1
Oct 23 04:40:56.895: INFO: Node Info: &Node{ObjectMeta:{master1 1b0e9b6c-fa73-4303-880f-3c662903b3ba 163784 0 2021-10-22 21:03:37 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:03:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-10-22 21:03:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2021-10-22 21:06:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kubelet Update v1 2021-10-22 21:11:04 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234763776 0} {} 196518324Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324599808 0} {} 195629492Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:07 +0000 UTC,LastTransitionTime:2021-10-22 21:09:07 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:54 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:54 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:54 +0000 UTC,LastTransitionTime:2021-10-22 21:03:34 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 04:40:54 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:30ce143f9c9243b59253027a77cdbf77,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:e78651c4-73ca-42e7-8083-bc7c7ebac4b6,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:2012644549052fa07c43b0d19f320c871a25e105d0b23e33645e4f1bcf8fcd97 nginx:1.20.1-alpine],SizeBytes:22650454,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa 
k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 04:40:56.896: INFO: Logging kubelet events for node master1
Oct 23 04:40:56.899: INFO: Logging pods the kubelet thinks are on node master1
Oct 23 04:40:56.927: INFO: kube-controller-manager-master1 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container kube-controller-manager ready: true, restart count 1
Oct 23 04:40:56.927: INFO: kube-proxy-fhqkt started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 04:40:56.927: INFO: kube-flannel-8vnf2 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Init container install-cni ready: true, restart count 1
Oct 23 04:40:56.927: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 04:40:56.927: INFO: kube-multus-ds-amd64-vl8qj started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container kube-multus ready: true, restart count 1
Oct 23 04:40:56.927: INFO: coredns-8474476ff8-q8d8x started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container coredns ready: true, restart count 2
Oct 23 04:40:56.927: INFO: container-registry-65d7c44b96-wtz5j started at 2021-10-22 21:10:37 +0000 UTC (0+2 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container docker-registry ready: true, restart count 0
Oct 23 04:40:56.927: INFO: Container nginx ready: true, restart count 0
Oct 23 04:40:56.927: INFO: node-exporter-fxb7q started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 04:40:56.927: INFO: Container node-exporter ready: true, restart count 0
Oct 23 04:40:56.927: INFO: kube-apiserver-master1 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 04:40:56.927: INFO: kube-scheduler-master1 started at 2021-10-22 21:22:33 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:56.927: INFO: Container kube-scheduler ready: true, restart count 0
W1023 04:40:56.941468 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
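Each per-node block in this teardown has the same shape: kubelet events, the pods scheduled to the node with per-container ready state, and latency metrics. For orientation, here is a minimal client-go sketch — the helper name dumpPodsOnNode and the exact output format are assumptions, not the e2e framework's implementation — showing how such a per-node pod listing can be produced with a field selector on spec.nodeName:

	// dump_sketch.go — illustrative only; uses stock client-go calls.
	package nodedumpsketch

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/fields"
		"k8s.io/client-go/kubernetes"
	)

	// dumpPodsOnNode lists every pod scheduled to the given node across all
	// namespaces and prints each container's ready state and restart count,
	// roughly mirroring the "Logging pods ..." entries in the log above.
	func dumpPodsOnNode(cs kubernetes.Interface, node string) error {
		sel := fields.OneTermEqualSelector("spec.nodeName", node).String()
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: sel})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			// Status.StartTime may be nil for pods that have not started yet.
			fmt.Printf("%s/%s started at %v\n", p.Namespace, p.Name, p.Status.StartTime)
			for _, st := range p.Status.ContainerStatuses {
				fmt.Printf("  Container %s ready: %t, restart count %d\n",
					st.Name, st.Ready, st.RestartCount)
			}
		}
		return nil
	}

Run against this cluster, something like dumpPodsOnNode(cs, "master1") would emit lines comparable to the "Container ... ready" entries above; the framework additionally collects kubelet events and latency metrics, as the surrounding log lines show.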
Oct 23 04:40:57.012: INFO: Latency metrics for node master1 Oct 23 04:40:57.012: INFO: Logging node info for node master2 Oct 23 04:40:57.020: INFO: Node Info: &Node{ObjectMeta:{master2 48070097-b11c-473d-9240-f4ee02bd7e2f 163833 0 2021-10-22 21:04:08 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2021-10-22 21:17:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:14 +0000 
UTC,LastTransitionTime:2021-10-22 21:09:14 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:04:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c5d510cf1060448cb87a1d02cd1f2972,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:8ec7c43d-60d2-4abb-84a1-5a37f3283118,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 04:40:57.021: INFO: Logging kubelet events for node master2
Oct 23 04:40:57.024: INFO: Logging pods the kubelet thinks are on node master2
Oct 23 04:40:57.034: INFO: kube-apiserver-master2 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 04:40:57.034: INFO: dns-autoscaler-7df78bfcfb-9ss69 started at 2021-10-22 21:06:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container autoscaler ready: true, restart count 1
Oct 23 04:40:57.034: INFO: node-exporter-vljkh started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 04:40:57.034: INFO: Container node-exporter ready: true, restart count 0
Oct 23 04:40:57.034: INFO: kube-controller-manager-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 04:40:57.034: INFO: kube-scheduler-master2 started at 2021-10-22 21:12:54 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 04:40:57.034: INFO: kube-proxy-2xlf2 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 04:40:57.034: INFO: kube-flannel-tfkj9 started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Init container install-cni ready: true, restart count 2
Oct 23 04:40:57.034: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 04:40:57.034: INFO: kube-multus-ds-amd64-m8ztc started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.034: INFO: Container kube-multus ready: true, restart count 1
W1023 04:40:57.051913 27
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 04:40:57.113: INFO: Latency metrics for node master2 Oct 23 04:40:57.114: INFO: Logging node info for node master3 Oct 23 04:40:57.116: INFO: Node Info: &Node{ObjectMeta:{master3 fe22a467-e2de-4b64-9399-d274e6d13231 163764 0 2021-10-22 21:04:18 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-10-22 21:04:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {nfd-master Update v1 2021-10-22 21:14:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2021-10-22 21:14:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234767872 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{405424133473 0} {} 
405424133473 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324603904 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:53 +0000 UTC,LastTransitionTime:2021-10-22 21:04:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 04:40:53 +0000 UTC,LastTransitionTime:2021-10-22 21:09:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:55ed55d7ecb94c5fbcecb32cb3747801,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:7e00baa8-f631-4d7e-baa1-cb915fbb1ea7,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Oct 23 04:40:57.117: INFO: Logging kubelet events for node master3
Oct 23 04:40:57.119: INFO: Logging pods the kubelet thinks are on node master3
Oct 23 04:40:57.129: INFO: kube-apiserver-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container kube-apiserver ready: true, restart count 0
Oct 23 04:40:57.129: INFO: kube-controller-manager-master3 started at 2021-10-22 21:09:03 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container kube-controller-manager ready: true, restart count 2
Oct 23 04:40:57.129: INFO: kube-proxy-l7st4 started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container kube-proxy ready: true, restart count 1
Oct 23 04:40:57.129: INFO: kube-flannel-rf9mv started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Init container install-cni ready: true, restart count 1
Oct 23 04:40:57.129: INFO: Container kube-flannel ready: true, restart count 1
Oct 23 04:40:57.129: INFO: node-feature-discovery-controller-cff799f9f-dgsfd started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container nfd-controller ready: true, restart count 0
Oct 23 04:40:57.129: INFO: node-exporter-b22mw started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 04:40:57.129: INFO: Container node-exporter ready: true, restart count 0
Oct 23 04:40:57.129: INFO: kube-scheduler-master3 started at 2021-10-22 21:04:46 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container kube-scheduler ready: true, restart count 2
Oct 23 04:40:57.129: INFO: kube-multus-ds-amd64-tfbmd started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container kube-multus ready: true, restart count 1
Oct 23 04:40:57.129: INFO: coredns-8474476ff8-7wlfp started at 2021-10-22 21:06:56 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.129: INFO: Container coredns ready: true, restart count 2
W1023 04:40:57.141915 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 23 04:40:57.219: INFO: Latency metrics for node master3
Oct 23 04:40:57.219: INFO: Logging node info for node node1
Oct 23 04:40:57.224: INFO: Node Info: &Node{ObjectMeta:{node1 1c590bf6-8845-4681-8fa1-7acc55183d29 163542 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:17:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 04:39:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 04:39:57 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269633024 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884632576 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:10 +0000 UTC,LastTransitionTime:2021-10-22 21:09:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:50 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:50 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:50 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 04:40:50 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f11a4b4c58ac4a4eb06ac043edeefa84,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:50e64d70-ffd2-496a-957a-81f1931a6b6e,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003429679,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba 
golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:d226f3fd5f293ff513f53573a40c069b89d57d42338a1045b493bf702ac6b1f6 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.5],SizeBytes:60182158,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:a296b4c6f6ee2b88f095b61e95c7ef4f51ba25598835b4978c9256d8c8ace48a alpine:3.12],SizeBytes:5581415,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 04:40:57.225: INFO: Logging kubelet events for node node1 Oct 23 04:40:57.227: INFO: Logging pods the kubelet thinks is on node node1 Oct 23 04:40:57.243: INFO: 
nginx-proxy-node1 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container nginx-proxy ready: true, restart count 2
Oct 23 04:40:57.243: INFO: kube-flannel-2cdvd started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Init container install-cni ready: true, restart count 2
Oct 23 04:40:57.243: INFO: Container kube-flannel ready: true, restart count 3
Oct 23 04:40:57.243: INFO: kubernetes-metrics-scraper-5558854cb-dfn2n started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
Oct 23 04:40:57.243: INFO: node-exporter-v656r started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container node-exporter ready: true, restart count 0
Oct 23 04:40:57.243: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-sjjtd started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container kube-sriovdp ready: true, restart count 0
Oct 23 04:40:57.243: INFO: kubernetes-dashboard-785dcbb76d-kc4kh started at 2021-10-22 21:07:01 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container kubernetes-dashboard ready: true, restart count 1
Oct 23 04:40:57.243: INFO: prometheus-k8s-0 started at 2021-10-22 21:19:48 +0000 UTC (0+4 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container config-reloader ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container custom-metrics-apiserver ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container grafana ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container prometheus ready: true, restart count 1
Oct 23 04:40:57.243: INFO: collectd-n9sbv started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container collectd ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container collectd-exporter ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container rbac-proxy ready: true, restart count 0
Oct 23 04:40:57.243: INFO: pod-back-off-image started at 2021-10-23 04:40:55 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container back-off ready: false, restart count 0
Oct 23 04:40:57.243: INFO: liveness-exec started at 2021-10-23 04:39:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container liveness-exec ready: true, restart count 0
Oct 23 04:40:57.243: INFO: liveness-http started at 2021-10-23 04:39:58 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container liveness-http ready: true, restart count 1
Oct 23 04:40:57.243: INFO: back-off-cap started at 2021-10-23 04:40:30 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container back-off-cap ready: false, restart count 1
Oct 23 04:40:57.243: INFO: prometheus-operator-585ccfb458-hwjk2 started at 2021-10-22 21:19:21 +0000 UTC (0+2 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container kube-rbac-proxy ready: true, restart count 0
Oct 23 04:40:57.243: INFO: Container prometheus-operator ready: true, restart count 0
Oct 23 04:40:57.243: INFO: kube-proxy-m9z8s started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded)
Oct 23 04:40:57.243: INFO: Container kube-proxy ready: true, restart count 2
Oct 23 04:40:57.243: INFO:
kube-multus-ds-amd64-l97s4 started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.243: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:40:57.243: INFO: node-feature-discovery-worker-2pvq5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.243: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:40:57.244: INFO: liveness-ba574742-1722-4e59-9ed6-de1043e6ba21 started at 2021-10-23 04:40:41 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.244: INFO: Container agnhost-container ready: true, restart count 0 Oct 23 04:40:57.244: INFO: startup-1c3c2cde-0e0a-4c7d-b756-600cc8f1c22f started at 2021-10-23 04:40:11 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.244: INFO: Container busybox ready: false, restart count 0 Oct 23 04:40:57.244: INFO: cmk-init-discover-node1-c599w started at 2021-10-22 21:17:43 +0000 UTC (0+3 container statuses recorded) Oct 23 04:40:57.244: INFO: Container discover ready: false, restart count 0 Oct 23 04:40:57.244: INFO: Container init ready: false, restart count 0 Oct 23 04:40:57.244: INFO: Container install ready: false, restart count 0 Oct 23 04:40:57.244: INFO: cmk-t9r2t started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 04:40:57.244: INFO: Container nodereport ready: true, restart count 0 Oct 23 04:40:57.244: INFO: Container reconcile ready: true, restart count 0 W1023 04:40:57.256310 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 04:40:57.472: INFO: Latency metrics for node node1 Oct 23 04:40:57.472: INFO: Logging node info for node node2 Oct 23 04:40:57.475: INFO: Node Info: &Node{ObjectMeta:{node2 bdba54c1-d4eb-4c09-a343-50f320ccb048 163826 0 2021-10-22 21:05:23 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.45.1.el7.x86_64 
feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2021-10-22 21:05:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2021-10-22 21:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2021-10-22 21:14:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2021-10-22 21:18:08 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2021-10-23 04:39:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {e2e.test Update v1 2021-10-23 04:39:57 +0000 UTC FieldsV1 
{"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{450471260160 0} {} 439913340Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269628928 0} {} 196552372Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{405424133473 0} {} 405424133473 DecimalSI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884628480 0} {} 174692020Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-10-22 21:09:08 +0000 UTC,LastTransitionTime:2021-10-22 21:09:08 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:05:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-23 04:40:56 +0000 UTC,LastTransitionTime:2021-10-22 21:06:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:82312646736a4d47a5e2182417308818,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:045f38e2-ca45-4931-a8ac-a14f5e34cbd2,KernelVersion:3.10.0-1160.45.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.9,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[localhost:30500/cmk@sha256:ba2eda55192ece5488254511709b932e8a99f600af8261a9f2a89d0dbc9b8fec localhost:30500/cmk:v1.5.1],SizeBytes:723992712,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 
aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:253392289,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:58172101,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:44576952,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:c3256608afd18299ac7559d97ec0a80149d265b35d2eeeb33a053826e486886a localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42674030,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:42321438,},ContainerImage{Names:[localhost:30500/tasextender@sha256:519ce66d3ef90d7545f5679b670aa50393adfbe9785a720ba26ce3ec4b263c5d localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Oct 23 04:40:57.475: INFO: Logging kubelet events for node node2 Oct 23 04:40:57.477: INFO: Logging pods the kubelet thinks is on node node2 Oct 23 04:40:57.531: INFO: nginx-proxy-node2 started at 2021-10-22 21:05:23 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container nginx-proxy ready: true, restart count 2 Oct 23 04:40:57.531: INFO: cmk-kn29k started at 2021-10-22 21:18:25 +0000 UTC (0+2 container statuses recorded) Oct 23 04:40:57.531: INFO: Container nodereport ready: true, restart count 1 Oct 23 04:40:57.531: INFO: Container reconcile ready: true, restart count 0 Oct 23 04:40:57.531: INFO: cmk-init-discover-node2-2btnq started at 2021-10-22 21:18:03 +0000 UTC (0+3 container statuses recorded) Oct 23 04:40:57.531: INFO: Container discover ready: false, 
restart count 0 Oct 23 04:40:57.531: INFO: Container init ready: false, restart count 0 Oct 23 04:40:57.531: INFO: Container install ready: false, restart count 0 Oct 23 04:40:57.531: INFO: cmk-webhook-6c9d5f8578-pkwhc started at 2021-10-22 21:18:26 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container cmk-webhook ready: true, restart count 0 Oct 23 04:40:57.531: INFO: kube-proxy-5h2bl started at 2021-10-22 21:05:27 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container kube-proxy ready: true, restart count 2 Oct 23 04:40:57.531: INFO: node-feature-discovery-worker-8k8m5 started at 2021-10-22 21:14:11 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container nfd-worker ready: true, restart count 0 Oct 23 04:40:57.531: INFO: kube-multus-ds-amd64-fww5b started at 2021-10-22 21:06:30 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container kube-multus ready: true, restart count 1 Oct 23 04:40:57.531: INFO: node-exporter-fjc79 started at 2021-10-22 21:19:28 +0000 UTC (0+2 container statuses recorded) Oct 23 04:40:57.531: INFO: Container kube-rbac-proxy ready: true, restart count 0 Oct 23 04:40:57.531: INFO: Container node-exporter ready: true, restart count 0 Oct 23 04:40:57.531: INFO: busybox-355cbca5-0242-4466-b3ca-e278a166de3d started at 2021-10-23 04:40:54 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container busybox ready: false, restart count 0 Oct 23 04:40:57.531: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-zhcfq started at 2021-10-22 21:15:26 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container kube-sriovdp ready: true, restart count 0 Oct 23 04:40:57.531: INFO: implicit-root-uid started at 2021-10-23 04:40:46 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container implicit-root-uid ready: false, restart count 0 Oct 23 04:40:57.531: INFO: pod-prestop-hook-d22151e9-23af-4952-8aa1-813bfc47542f started at 2021-10-23 04:40:24 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container nginx ready: true, restart count 0 Oct 23 04:40:57.531: INFO: tas-telemetry-aware-scheduling-84ff454dfb-gltgg started at 2021-10-22 21:22:32 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container tas-extender ready: true, restart count 0 Oct 23 04:40:57.531: INFO: busybox-84c6ebfa-fa6e-447f-8430-b68b36f54cbe started at 2021-10-23 04:40:52 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container busybox ready: false, restart count 0 Oct 23 04:40:57.531: INFO: startup-cbb1c365-9987-4bf2-bc03-b9eb55a83f38 started at 2021-10-23 04:40:52 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container busybox ready: false, restart count 0 Oct 23 04:40:57.531: INFO: kube-flannel-xx6ls started at 2021-10-22 21:06:21 +0000 UTC (1+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Init container install-cni ready: true, restart count 1 Oct 23 04:40:57.531: INFO: Container kube-flannel ready: true, restart count 2 Oct 23 04:40:57.531: INFO: dapi-test-pod started at 2021-10-23 04:40:48 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container test-container ready: false, restart count 0 Oct 23 04:40:57.531: INFO: startup-b45d23da-f504-49a9-95db-803a8d79e862 started at 2021-10-23 04:40:53 +0000 UTC (0+1 container statuses recorded) Oct 23 04:40:57.531: INFO: Container busybox ready: false, restart count 0 Oct 23 
04:40:57.531: INFO: collectd-xhdgw started at 2021-10-22 21:23:20 +0000 UTC (0+3 container statuses recorded) Oct 23 04:40:57.531: INFO: Container collectd ready: true, restart count 0 Oct 23 04:40:57.531: INFO: Container collectd-exporter ready: true, restart count 0 Oct 23 04:40:57.531: INFO: Container rbac-proxy ready: true, restart count 0 W1023 04:40:57.543077 27 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 23 04:40:58.599: INFO: Latency metrics for node node2 Oct 23 04:40:58.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7045" for this suite. •! Panic [5.778 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 Test Panicked runtime error: invalid memory address or nil pointer dereference /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x109 panic(0x653b640, 0x9beb6a0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/pod.podContainerStarted.func1(0xc000baef0c, 0x2, 0x2) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/resource.go:334 +0x17b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc0046a0f80, 0xc000baef00, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0009bde00, 0xc0046a0f80, 0xc00454d320, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:541 +0x128 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0009bde00, 0xc0046a0f80, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:427 +0x87 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0009bde00, 0xc0046a0f80, 0xc0009bde00, 0xc0046a0f80) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:452 +0x74 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x77359400, 0x45d964b800, 0xc0046a0f80, 0x14, 0xc0047026f0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/framework/pod.WaitForPodContainerStarted(0x779f8f8, 0xc0045438c0, 0xc0024b3368, 0x14, 0xc0047026f0, 0x2c, 0x0, 0x45d964b800, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pod/wait.go:554 +0x92 k8s.io/kubernetes/test/e2e/common/node.glob..func2.18() 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:417 +0x39e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003083680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc003083680) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc003083680, 0x70e7b58) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:48.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 23 04:40:48.196: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Oct 23 04:40:48.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2010 create -f -' Oct 23 04:40:48.646: INFO: stderr: "" Oct 23 04:40:48.646: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Oct 23 04:40:58.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2010 logs dapi-test-pod test-container' Oct 23 04:40:58.819: INFO: stderr: "" Oct 23 04:40:58.819: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2010\nMY_POD_IP=10.244.4.114\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Oct 23 04:40:58.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-2010 logs dapi-test-pod test-container' Oct 23 04:40:58.989: INFO: stderr: "" Oct 23 04:40:58.989: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2010\nMY_POD_IP=10.244.4.114\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:40:58.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-2010" for this suite. 
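Note: the dapi-test-pod exercised above demonstrates downward API injection; the MY_POD_NAME=dapi-test-pod and MY_POD_NAMESPACE=examples-2010 values in the captured stdout come from fieldRef env vars. A minimal manifest of the shape this test creates (image tag and command here are illustrative, not taken from the suite) is:

apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28          # illustrative image
    command: ["sh", "-c", "env"] # prints the injected variables, as seen in the log above
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP

The test then simply reads the container logs (as the two `kubectl logs` invocations above do) and asserts that the name and namespace appear.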
• [SLOW TEST:10.834 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":3,"skipped":159,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:59.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 23 04:41:06.374: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:41:06.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5565" for this suite. 
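Note: the termination-message check above works by having the container write to its terminationMessagePath before exiting; the kubelet then surfaces that text in the container's terminated state, which is what the "Expected: &{DONE} to match ..." line verifies. A sketch of such a pod (name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: term-msg
    image: busybox:1.28
    command: ["sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log   # the default path
    terminationMessagePolicy: File                 # read the file contents verbatim

After the pod exits, the message can be read back with kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'.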
• [SLOW TEST:7.087 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":4,"skipped":316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:41:06.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Oct 23 04:41:06.657: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-0d4fa03a-0f85-41c6-9f38-b0bf35760318" in namespace "security-context-test-2002" to be "Succeeded or Failed" Oct 23 04:41:06.660: INFO: Pod "alpine-nnp-nil-0d4fa03a-0f85-41c6-9f38-b0bf35760318": Phase="Pending", Reason="", readiness=false. Elapsed: 3.064306ms Oct 23 04:41:08.666: INFO: Pod "alpine-nnp-nil-0d4fa03a-0f85-41c6-9f38-b0bf35760318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00849883s Oct 23 04:41:10.674: INFO: Pod "alpine-nnp-nil-0d4fa03a-0f85-41c6-9f38-b0bf35760318": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016765932s Oct 23 04:41:12.677: INFO: Pod "alpine-nnp-nil-0d4fa03a-0f85-41c6-9f38-b0bf35760318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020102006s Oct 23 04:41:12.677: INFO: Pod "alpine-nnp-nil-0d4fa03a-0f85-41c6-9f38-b0bf35760318" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:41:12.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2002" for this suite. 
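Note: the AllowPrivilegeEscalation case above leaves securityContext.allowPrivilegeEscalation unset while running as a non-root UID, and the kubelet then still permits escalation. A hedged sketch of such a pod (the image and the check command are stand-ins for the suite's purpose-built test image, not the actual test spec):

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-nil-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-nil
    image: alpine:3.12
    # NoNewPrivs is reported by newer kernels; 0 means escalation is still permitted
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status || true"]
    securityContext:
      runAsUser: 1000
      # allowPrivilegeEscalation is deliberately omitted, which is the case under test

Setting allowPrivilegeEscalation: false instead would cause the kubelet to run the container with no_new_privs, which is the contrasting case in the same test file.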
• [SLOW TEST:6.071 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:39:57.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples W1023 04:39:57.667206 26 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Oct 23 04:39:57.667: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled Oct 23 04:39:57.669: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 Oct 23 04:39:57.677: INFO: Found ClusterRoles; assuming RBAC is enabled. 
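Note: the two pods created in the [It] step that follows are the classic liveness examples: one whose exec probe starts failing after the probed file is removed, and one whose HTTP /healthz endpoint starts returning errors. Manifests of roughly this shape (images, args, and timings are illustrative, not copied from the suite):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once /tmp/health is removed
      initialDelaySeconds: 15
---
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32   # illustrative server image
    args: ["liveness"]                               # serves /healthz, then starts reporting unhealthy
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15

In both cases the kubelet kills and restarts the container once the probe fails, which is the restart count transition the "Check restarts" loop below polls for.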
[It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 Oct 23 04:39:57.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4184 create -f -' Oct 23 04:39:58.223: INFO: stderr: "" Oct 23 04:39:58.223: INFO: stdout: "pod/liveness-exec created\n" Oct 23 04:39:58.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-4184 create -f -' Oct 23 04:39:58.541: INFO: stderr: "" Oct 23 04:39:58.541: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts Oct 23 04:40:12.549: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:14.550: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:14.552: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:16.554: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:16.556: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:18.558: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:18.559: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:20.562: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:20.562: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:22.565: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:22.565: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:24.573: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:24.573: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:26.576: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:26.576: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:28.579: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:28.579: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:30.583: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:30.583: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:32.588: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:32.588: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:34.593: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:34.593: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:36.598: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:36.598: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:38.604: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:38.604: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:40.608: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:40.608: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:42.612: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:42.612: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:44.620: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:44.621: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:46.624: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:46.624: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:48.629: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:48.629: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:50.633: INFO: Pod: liveness-http, restart count:0 Oct 23 04:40:50.633: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:52.636: INFO: Pod: liveness-http, restart count:1 Oct 23 04:40:52.636: INFO: Saw liveness-http restart, succeeded... 
Oct 23 04:40:52.636: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:54.640: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:56.643: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:40:58.648: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:00.652: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:02.655: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:04.660: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:06.663: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:08.670: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:10.674: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:12.678: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:14.683: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:16.686: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:18.692: INFO: Pod: liveness-exec, restart count:0 Oct 23 04:41:20.696: INFO: Pod: liveness-exec, restart count:1 Oct 23 04:41:20.696: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:41:20.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4184" for this suite. • [SLOW TEST:83.063 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":1,"skipped":8,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:41:12.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Oct 23 04:41:12.871: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd" in namespace "security-context-test-1551" to be "Succeeded or Failed" Oct 23 04:41:12.873: INFO: Pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067085ms Oct 23 04:41:14.876: INFO: Pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005882049s Oct 23 04:41:16.880: INFO: Pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009327241s Oct 23 04:41:18.885: INFO: Pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.014177563s Oct 23 04:41:20.889: INFO: Pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd": Phase="Failed", Reason="", readiness=false. Elapsed: 8.01834791s Oct 23 04:41:20.889: INFO: Pod "busybox-readonly-true-8ed5f048-9906-4ec6-b3bc-9a407e34edcd" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:41:20.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1551" for this suite. • [SLOW TEST:8.065 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":500,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:52.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-84c6ebfa-fa6e-447f-8430-b68b36f54cbe in namespace container-probe-3810 Oct 23 04:41:00.238: INFO: Started pod busybox-84c6ebfa-fa6e-447f-8430-b68b36f54cbe in namespace container-probe-3810 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:41:00.241: INFO: Initial restart count of pod busybox-84c6ebfa-fa6e-447f-8430-b68b36f54cbe is 0 Oct 23 04:41:46.365: INFO: Restart count of pod container-probe-3810/busybox-84c6ebfa-fa6e-447f-8430-b68b36f54cbe is now 1 (46.124535665s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:41:46.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3810" for this suite. 
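Note: the timeout case above relies on exec probe timeouts being enforced (kubelet 1.20+, where the ExecProbeTimeout feature gate is on by default, hence the [MinimumKubeletVersion:1.20] tag). A sketch of a pod whose probe always overruns its timeoutSeconds (image and timings illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-exec-timeout-demo    # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["sh", "-c", "sleep 10"]  # always runs longer than timeoutSeconds
      initialDelaySeconds: 5
      timeoutSeconds: 1
      failureThreshold: 1

Each probe run is killed after one second and counted as a failure, so the kubelet restarts the container; that is the restartCount 0 -> 1 transition logged above.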
• [SLOW TEST:54.185 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":5,"skipped":549,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:54.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-355cbca5-0242-4466-b3ca-e278a166de3d in namespace container-probe-2185 Oct 23 04:41:02.367: INFO: Started pod busybox-355cbca5-0242-4466-b3ca-e278a166de3d in namespace container-probe-2185 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:41:02.371: INFO: Initial restart count of pod busybox-355cbca5-0242-4466-b3ca-e278a166de3d is 0 Oct 23 04:41:50.567: INFO: Restart count of pod container-probe-2185/busybox-355cbca5-0242-4466-b3ca-e278a166de3d is now 1 (48.196502276s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:41:50.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2185" for this suite. 
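Note: this variant differs from the previous one in that the probe command both exceeds timeoutSeconds and would exit non-zero anyway, so the container must be restarted whether the kubelet treats the overrun as a timeout or waits for the failing exit code. A sketch under the same illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-failing-slow-probe-demo   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "sleep 10; exit 1"]  # overruns the 1s timeout and would fail anyway
      initialDelaySeconds: 5
      timeoutSeconds: 1
      failureThreshold: 1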
• [SLOW TEST:56.257 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":4,"skipped":181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:41:50.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. STEP: verifying the node has the label foo-cedfee5d-aef0-4ec5-819c-ef0627cf048e bar STEP: verifying the node has the label fizz-8c49eab8-8aa2-4175-92d1-dc2bd30282a0 buzz STEP: Trying to create runtimeclass and pod STEP: removing the label fizz-8c49eab8-8aa2-4175-92d1-dc2bd30282a0 off the node node2 STEP: verifying the node doesn't have the label fizz-8c49eab8-8aa2-4175-92d1-dc2bd30282a0 STEP: removing the label foo-cedfee5d-aef0-4ec5-819c-ef0627cf048e off the node node2 STEP: verifying the node doesn't have the label foo-cedfee5d-aef0-4ec5-819c-ef0627cf048e [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:00.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-133" for this suite. 
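Note: the RuntimeClass scheduling test above labels a node (here node2, with the foo-cedfee5d-.../bar and fizz-8c49eab8-.../buzz labels shown in the log), creates a RuntimeClass whose scheduling.nodeSelector requires those labels, and verifies a pod referencing it lands on that node. A sketch under those assumptions (the RuntimeClass and pod names are illustrative, and the handler must match one configured in the node's container runtime):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: scheduling-demo              # illustrative name
handler: runc                        # illustrative; must exist on the node
scheduling:
  nodeSelector:
    foo-cedfee5d-aef0-4ec5-819c-ef0627cf048e: bar
    fizz-8c49eab8-8aa2-4175-92d1-dc2bd30282a0: buzz
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-demo
spec:
  runtimeClassName: scheduling-demo
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.4.1

The RuntimeClass admission controller merges the nodeSelector into the pod spec, so the scheduler can only place the pod on nodes carrying both labels, which is why the test removes the labels again during cleanup.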
• [SLOW TEST:10.120 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":5,"skipped":232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:41:21.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Oct 23 04:41:21.064: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:23.068: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:25.067: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:27.069: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:29.068: INFO: The status of Pod master is Running (Ready = true) Oct 23 04:41:29.082: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:31.085: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:33.089: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:35.091: INFO: The status of Pod slave is Running (Ready = true) Oct 23 04:41:35.107: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:37.111: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:39.113: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:41.112: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:43.110: INFO: The status of Pod private is Running (Ready = true) Oct 23 04:41:43.125: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:45.133: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:47.131: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:41:49.134: INFO: The status of Pod default is Running (Ready = true) Oct 23 04:41:49.139: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:49.139: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:50.061: INFO: Exec stderr: "" Oct 23 04:41:50.064: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:50.064: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:50.360: INFO: Exec stderr: "" Oct 23 04:41:50.363: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:50.363: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:50.621: INFO: Exec stderr: "" Oct 23 04:41:50.623: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:50.623: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:50.851: INFO: Exec stderr: "" Oct 23 04:41:50.853: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:50.853: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:51.106: INFO: Exec stderr: "" Oct 23 04:41:51.109: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:51.109: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:51.241: INFO: Exec stderr: "" Oct 23 04:41:51.245: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:51.245: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:51.347: INFO: Exec stderr: "" Oct 23 04:41:51.352: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:51.352: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:52.099: INFO: Exec stderr: "" Oct 23 04:41:52.101: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:52.101: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:52.217: INFO: Exec stderr: "" Oct 23 04:41:52.220: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:52.220: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:52.460: INFO: Exec stderr: "" Oct 23 04:41:52.462: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:52.462: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:52.883: INFO: Exec stderr: "" Oct 23 04:41:52.885: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 
04:41:52.885: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:53.188: INFO: Exec stderr: "" Oct 23 04:41:53.191: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:53.191: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:53.462: INFO: Exec stderr: "" Oct 23 04:41:53.465: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:53.465: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:53.698: INFO: Exec stderr: "" Oct 23 04:41:53.700: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:53.700: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:53.858: INFO: Exec stderr: "" Oct 23 04:41:53.861: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:53.861: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:54.031: INFO: Exec stderr: "" Oct 23 04:41:54.034: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:54.034: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:54.141: INFO: Exec stderr: "" Oct 23 04:41:54.144: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:54.144: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:54.261: INFO: Exec stderr: "" Oct 23 04:41:54.263: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:54.263: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:54.353: INFO: Exec stderr: "" Oct 23 04:41:54.357: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:54.357: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:54.450: INFO: Exec stderr: "" Oct 23 04:41:56.474: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-1536"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-1536"/host; echo host > "/var/lib/kubelet/mount-propagation-1536"/host/file] Namespace:mount-propagation-1536 PodName:hostexec-node2-gc6j5 
ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:41:56.474: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:56.573: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:56.573: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:56.663: INFO: pod master mount master: stdout: "master", stderr: "" error: Oct 23 04:41:56.666: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:56.666: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:56.779: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:56.781: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:56.781: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:56.999: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:57.002: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:57.002: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:57.485: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:57.488: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:57.488: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:57.597: INFO: pod master mount host: stdout: "host", stderr: "" error: Oct 23 04:41:57.599: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:57.599: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:57.754: INFO: pod slave mount master: stdout: "master", stderr: "" error: Oct 23 04:41:57.757: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:57.757: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:57.930: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Oct 23 04:41:57.932: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:57.932: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.121: INFO: pod slave mount private: stdout: 
"", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:58.124: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.124: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.480: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:58.483: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.483: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.568: INFO: pod slave mount host: stdout: "host", stderr: "" error: Oct 23 04:41:58.571: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.571: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.647: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:58.650: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.650: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.750: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:58.752: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.752: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.845: INFO: pod private mount private: stdout: "private", stderr: "" error: Oct 23 04:41:58.848: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.848: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:58.993: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:58.995: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:58.995: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.104: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:59.106: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.106: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.217: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:59.220: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.220: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.327: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:59.330: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.331: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.413: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:59.417: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.417: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.495: INFO: pod default mount default: stdout: "default", stderr: "" error: Oct 23 04:41:59.497: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.497: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.584: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Oct 23 04:41:59.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-1536"/master/file` = master] Namespace:mount-propagation-1536 PodName:hostexec-node2-gc6j5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:41:59.585: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.671: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-1536"/slave/file] Namespace:mount-propagation-1536 PodName:hostexec-node2-gc6j5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:41:59.672: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.753: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-1536"/host] Namespace:mount-propagation-1536 PodName:hostexec-node2-gc6j5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:41:59.753: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.878: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-1536 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.878: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:41:59.973: INFO: Exec stderr: "" Oct 23 04:41:59.976: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-1536 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:41:59.976: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:42:00.196: INFO: Exec stderr: "" Oct 23 04:42:00.199: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-1536 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:42:00.199: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:42:00.453: INFO: Exec stderr: "" Oct 23 04:42:00.456: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-1536 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Oct 23 04:42:00.456: INFO: >>> kubeConfig: /root/.kube/config Oct 23 04:42:00.767: INFO: Exec stderr: "" Oct 23 04:42:00.767: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-1536"] Namespace:mount-propagation-1536 PodName:hostexec-node2-gc6j5 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Oct 23 04:42:00.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-gc6j5 in namespace mount-propagation-1536 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:00.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-1536" for this suite. 
• [SLOW TEST:39.845 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":7,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:53.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-b45d23da-f504-49a9-95db-803a8d79e862 in namespace container-probe-1405 Oct 23 04:41:01.499: INFO: Started pod startup-b45d23da-f504-49a9-95db-803a8d79e862 in namespace container-probe-1405 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:41:01.501: INFO: Initial restart count of pod startup-b45d23da-f504-49a9-95db-803a8d79e862 is 0 Oct 23 04:42:05.631: INFO: Restart count of pod container-probe-1405/startup-b45d23da-f504-49a9-95db-803a8d79e862 is now 1 (1m4.129865827s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:05.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1405" for this suite. 
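------------------------------
The restart at 1m4s above is the kubelet acting on a startup probe that never succeeds: once FailureThreshold consecutive failures, PeriodSeconds apart, are exhausted, the container is killed and restarted. A sketch of such a probe, assuming the v1.21-era k8s.io/api where Probe still embeds Handler (later releases rename it ProbeHandler); the values are illustrative:

package probes

import corev1 "k8s.io/api/core/v1"

// A startup probe that can never pass: after FailureThreshold failures,
// PeriodSeconds apart, the kubelet restarts the container and restartCount
// goes from 0 to 1, as in the log above.
var failingStartup = &corev1.Probe{
	Handler: corev1.Handler{
		Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
	},
	PeriodSeconds:    10,
	FailureThreshold: 3,
}
------------------------------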
• [SLOW TEST:72.190 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":7,"skipped":850,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:42:05.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Oct 23 04:42:05.925: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:05.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-4371" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.039 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:41:46.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-f614bd5b-c49e-4a94-9438-d958a214a7af in namespace container-probe-6278 Oct 23 04:41:54.558: INFO: Started pod liveness-f614bd5b-c49e-4a94-9438-d958a214a7af in namespace container-probe-6278 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:41:54.561: INFO: Initial restart count of pod liveness-f614bd5b-c49e-4a94-9438-d958a214a7af is 0 Oct 23 04:42:10.604: INFO: 
Restart count of pod container-probe-6278/liveness-f614bd5b-c49e-4a94-9438-d958a214a7af is now 1 (16.042732521s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:10.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6278" for this suite. • [SLOW TEST:24.106 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":6,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:42:10.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:16.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9279" for this suite. 
• [SLOW TEST:6.098 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":7,"skipped":663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:42:17.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-336" for this suite. 
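------------------------------
The sysctl pod above is rejected rather than started: a greylisted ("unsafe") sysctl such as kernel.msgmax is refused by the kubelet unless it was explicitly enabled via --allowed-unsafe-sysctls. A sketch of the pod shape under that assumption; the sysctl name and image are illustrative:

package sysctls

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// greylistedPod asks for an unsafe sysctl via the pod security context.
// Unless the node's kubelet runs with --allowed-unsafe-sysctls=kernel.msgmax,
// the pod is rejected (SysctlForbidden) instead of being launched.
var greylistedPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
	Spec: corev1.PodSpec{
		SecurityContext: &corev1.PodSecurityContext{
			Sysctls: []corev1.Sysctl{{Name: "kernel.msgmax", Value: "10000"}},
		},
		Containers: []corev1.Container{{
			Name:    "cntr",
			Image:   "busybox",
			Command: []string{"sh", "-c", "sysctl kernel.msgmax"},
		}},
	},
}
------------------------------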
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":8,"skipped":768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ Oct 23 04:42:19.811: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:42:01.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true Oct 23 04:42:18.078: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:42:20.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3875" for this suite. 
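------------------------------
The readiness-gate pod above stays Ready=false until every condition declared in spec.readinessGates is True in pod.status.conditions; the test flips them by patching the status subresource. A sketch of one plausible way to do the same with client-go APIs of the same vintage; this is a hypothetical helper, not the framework's actual code:

package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// The pod declares two custom readiness gates; kubelet reports Ready=true
// only once every gate's condition is True in pod.status.conditions.
var gates = []corev1.PodReadinessGate{
	{ConditionType: "k8s.io/test-condition1"},
	{ConditionType: "k8s.io/test-condition2"},
}

// setCondition patches one gate condition on the pod's status subresource,
// the way an external controller would flip a gate.
func setCondition(ctx context.Context, c kubernetes.Interface, ns, pod string,
	cond corev1.PodConditionType, status corev1.ConditionStatus) error {
	patch := []byte(fmt.Sprintf(
		`{"status":{"conditions":[{"type":%q,"status":%q}]}}`, cond, status))
	_, err := c.CoreV1().Pods(ns).Patch(ctx, pod,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}

Because pod conditions merge on their type field, the strategic-merge patch adds or updates just the one condition rather than replacing the whole list.
------------------------------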
• [SLOW TEST:19.079 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:42:00.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-8699bb80-d472-45b2-84e0-4f157d8aeb23 in namespace container-probe-9000 Oct 23 04:42:04.911: INFO: Started pod startup-8699bb80-d472-45b2-84e0-4f157d8aeb23 in namespace container-probe-9000 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:42:04.913: INFO: Initial restart count of pod startup-8699bb80-d472-45b2-84e0-4f157d8aeb23 is 0 Oct 23 04:43:03.035: INFO: Restart count of pod container-probe-9000/startup-8699bb80-d472-45b2-84e0-4f157d8aeb23 is now 1 (58.121219893s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:43:03.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9000" for this suite. 
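------------------------------
Here the liveness probe is held back until the startup probe has passed once, which is why the restart lands ~58s in rather than within the liveness probe's own small budget. A sketch of that pairing, same API vintage as above and with illustrative commands and values:

package startupgate

import corev1 "k8s.io/api/core/v1"

// Liveness checks are suspended until the startup probe succeeds once.
// The startup probe passes after the container touches /tmp/startup; only
// then does the always-failing liveness probe start counting failures.
var container = corev1.Container{
	Name:    "cntr",
	Image:   "busybox",
	Command: []string{"sh", "-c", "sleep 20 && touch /tmp/startup && sleep 3600"},
	StartupProbe: &corev1.Probe{
		Handler:          corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup"}}},
		PeriodSeconds:    5,
		FailureThreshold: 60,
	},
	LivenessProbe: &corev1.Probe{
		Handler:          corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"/bin/false"}}},
		PeriodSeconds:    10,
		FailureThreshold: 3,
	},
}
------------------------------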
• [SLOW TEST:62.176 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":6,"skipped":260,"failed":0} Oct 23 04:43:03.050: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:42:05.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 in namespace container-probe-7384 Oct 23 04:42:11.980: INFO: Started pod busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 in namespace container-probe-7384 Oct 23 04:42:11.980: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (922ns elapsed) Oct 23 04:42:13.981: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (2.001410456s elapsed) Oct 23 04:42:15.982: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (4.002152352s elapsed) Oct 23 04:42:17.982: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (6.002614902s elapsed) Oct 23 04:42:19.984: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (8.004157357s elapsed) Oct 23 04:42:21.985: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (10.005602162s elapsed) Oct 23 04:42:23.988: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (12.007870135s elapsed) Oct 23 04:42:25.988: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (14.008047317s elapsed) Oct 23 04:42:27.989: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (16.00968465s elapsed) Oct 23 04:42:29.992: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (18.011886848s elapsed) Oct 23 04:42:31.993: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (20.013625955s elapsed) Oct 23 04:42:33.996: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (22.016007224s elapsed) Oct 23 04:42:35.996: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (24.016454975s elapsed) Oct 23 04:42:37.997: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (26.017328791s elapsed) Oct 23 04:42:39.998: INFO: pod 
container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (28.018729363s elapsed) Oct 23 04:42:41.999: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (30.019551627s elapsed) Oct 23 04:42:44.001: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (32.02095674s elapsed) Oct 23 04:42:46.002: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (34.021917961s elapsed) Oct 23 04:42:48.002: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (36.022322282s elapsed) Oct 23 04:42:50.003: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (38.023574687s elapsed) Oct 23 04:42:52.005: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (40.025722678s elapsed) Oct 23 04:42:54.007: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (42.027717188s elapsed) Oct 23 04:42:56.008: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (44.028214889s elapsed) Oct 23 04:42:58.009: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (46.029500694s elapsed) Oct 23 04:43:00.010: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (48.030735297s elapsed) Oct 23 04:43:02.012: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (50.032211747s elapsed) Oct 23 04:43:04.013: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (52.033638108s elapsed) Oct 23 04:43:06.014: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (54.034412389s elapsed) Oct 23 04:43:08.015: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (56.03481285s elapsed) Oct 23 04:43:10.016: INFO: pod container-probe-7384/busybox-6e40f963-9165-411e-ac9d-fb1bac2e51d4 is not ready (58.03604882s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:43:12.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7384" for this suite. 
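------------------------------
The minute of "is not ready" polling above is the expected outcome when an exec readiness probe overruns its timeout: from kubelet 1.20 on (hence the [MinimumKubeletVersion:1.20] tag), exec probe timeouts are enforced and count as failures, so the container never turns Ready. A sketch with illustrative values:

package probetimeout

import corev1 "k8s.io/api/core/v1"

// The probe command sleeps far longer than TimeoutSeconds. On kubelets
// >= 1.20 the timeout is enforced and each overrun counts as a failure,
// so the container is never marked Ready.
var readiness = &corev1.Probe{
	Handler: corev1.Handler{
		Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 600"}},
	},
	TimeoutSeconds: 1,
	PeriodSeconds:  10,
}
------------------------------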
• [SLOW TEST:66.091 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":8,"skipped":976,"failed":0} Oct 23 04:43:12.035: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:58.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Oct 23 04:41:07.427: INFO: watch delete seen for pod-submit-status-2-0 Oct 23 04:41:07.427: INFO: Pod pod-submit-status-2-0 on node node2 timings total=8.719115265s t=675ms run=0s execute=0s Oct 23 04:41:14.289: INFO: watch delete seen for pod-submit-status-2-1 Oct 23 04:41:14.289: INFO: Pod pod-submit-status-2-1 on node node2 timings total=6.862052377s t=247ms run=0s execute=0s Oct 23 04:41:14.297: INFO: watch delete seen for pod-submit-status-0-0 Oct 23 04:41:14.297: INFO: Pod pod-submit-status-0-0 on node node2 timings total=15.589718998s t=1.024s run=0s execute=0s Oct 23 04:41:14.309: INFO: watch delete seen for pod-submit-status-1-0 Oct 23 04:41:14.309: INFO: Pod pod-submit-status-1-0 on node node2 timings total=15.601125703s t=1.429s run=0s execute=0s Oct 23 04:41:16.892: INFO: watch delete seen for pod-submit-status-2-2 Oct 23 04:41:16.893: INFO: Pod pod-submit-status-2-2 on node node2 timings total=2.603618769s t=1.135s run=0s execute=0s Oct 23 04:41:20.825: INFO: watch delete seen for pod-submit-status-1-1 Oct 23 04:41:20.826: INFO: Pod pod-submit-status-1-1 on node node2 timings total=6.516641447s t=1.342s run=0s execute=0s Oct 23 04:41:23.267: INFO: watch delete seen for pod-submit-status-1-2 Oct 23 04:41:23.267: INFO: Pod pod-submit-status-1-2 on node node2 timings total=2.441498419s t=866ms run=0s execute=0s Oct 23 04:41:24.424: INFO: watch delete seen for pod-submit-status-2-3 Oct 23 04:41:24.424: INFO: Pod pod-submit-status-2-3 on node node2 timings total=7.531200581s t=995ms run=0s execute=0s Oct 23 04:41:24.825: INFO: watch delete seen for pod-submit-status-0-1 Oct 23 04:41:24.825: INFO: Pod pod-submit-status-0-1 on node node2 timings total=10.527946303s t=1.83s run=0s execute=0s Oct 23 04:41:25.623: INFO: watch delete seen for pod-submit-status-1-3 Oct 23 04:41:25.623: INFO: Pod pod-submit-status-1-3 on node node2 timings total=2.355939845s t=138ms run=0s execute=0s Oct 23 04:41:26.823: INFO: watch delete seen for pod-submit-status-0-2 Oct 23 04:41:26.823: INFO: Pod pod-submit-status-0-2 on node node2 timings total=1.997969682s t=14ms run=0s 
execute=0s Oct 23 04:41:29.224: INFO: watch delete seen for pod-submit-status-2-4 Oct 23 04:41:29.224: INFO: Pod pod-submit-status-2-4 on node node2 timings total=4.800648849s t=455ms run=0s execute=0s Oct 23 04:41:32.426: INFO: watch delete seen for pod-submit-status-1-4 Oct 23 04:41:32.426: INFO: Pod pod-submit-status-1-4 on node node2 timings total=6.802958428s t=1.74s run=0s execute=0s Oct 23 04:41:35.426: INFO: watch delete seen for pod-submit-status-0-3 Oct 23 04:41:35.426: INFO: Pod pod-submit-status-0-3 on node node2 timings total=8.602585193s t=1.998s run=0s execute=0s Oct 23 04:41:36.023: INFO: watch delete seen for pod-submit-status-2-5 Oct 23 04:41:36.023: INFO: Pod pod-submit-status-2-5 on node node2 timings total=6.798695858s t=1.477s run=0s execute=0s Oct 23 04:41:39.025: INFO: watch delete seen for pod-submit-status-2-6 Oct 23 04:41:39.025: INFO: Pod pod-submit-status-2-6 on node node2 timings total=3.002177015s t=1.533s run=0s execute=0s Oct 23 04:41:42.224: INFO: watch delete seen for pod-submit-status-2-7 Oct 23 04:41:42.224: INFO: Pod pod-submit-status-2-7 on node node2 timings total=3.198360582s t=493ms run=0s execute=0s Oct 23 04:41:43.424: INFO: watch delete seen for pod-submit-status-0-4 Oct 23 04:41:43.424: INFO: Pod pod-submit-status-0-4 on node node2 timings total=7.997472474s t=954ms run=0s execute=0s Oct 23 04:41:49.997: INFO: watch delete seen for pod-submit-status-2-8 Oct 23 04:41:49.997: INFO: Pod pod-submit-status-2-8 on node node2 timings total=7.772925225s t=1.579s run=0s execute=0s Oct 23 04:42:04.203: INFO: watch delete seen for pod-submit-status-2-9 Oct 23 04:42:04.203: INFO: Pod pod-submit-status-2-9 on node node2 timings total=14.205671653s t=1.317s run=0s execute=0s Oct 23 04:42:14.243: INFO: watch delete seen for pod-submit-status-2-10 Oct 23 04:42:14.243: INFO: Pod pod-submit-status-2-10 on node node2 timings total=10.040282589s t=1.219s run=0s execute=0s Oct 23 04:42:24.201: INFO: watch delete seen for pod-submit-status-2-11 Oct 23 04:42:24.201: INFO: Pod pod-submit-status-2-11 on node node2 timings total=9.95816942s t=1.488s run=0s execute=0s Oct 23 04:42:28.818: INFO: watch delete seen for pod-submit-status-1-5 Oct 23 04:42:28.819: INFO: Pod pod-submit-status-1-5 on node node2 timings total=56.392352443s t=1.346s run=0s execute=0s Oct 23 04:42:28.827: INFO: watch delete seen for pod-submit-status-0-5 Oct 23 04:42:28.827: INFO: Pod pod-submit-status-0-5 on node node2 timings total=45.40372294s t=341ms run=0s execute=0s Oct 23 04:42:34.204: INFO: watch delete seen for pod-submit-status-2-12 Oct 23 04:42:34.204: INFO: Pod pod-submit-status-2-12 on node node2 timings total=10.002745059s t=1.859s run=0s execute=0s Oct 23 04:42:44.202: INFO: watch delete seen for pod-submit-status-0-6 Oct 23 04:42:44.202: INFO: Pod pod-submit-status-0-6 on node node2 timings total=15.374171418s t=1.996s run=3s execute=0s Oct 23 04:42:44.209: INFO: watch delete seen for pod-submit-status-2-13 Oct 23 04:42:44.209: INFO: Pod pod-submit-status-2-13 on node node2 timings total=10.004900246s t=1.795s run=0s execute=0s Oct 23 04:42:44.218: INFO: watch delete seen for pod-submit-status-1-6 Oct 23 04:42:44.218: INFO: Pod pod-submit-status-1-6 on node node2 timings total=15.3993378s t=1.563s run=0s execute=0s Oct 23 04:42:50.677: INFO: watch delete seen for pod-submit-status-0-7 Oct 23 04:42:50.678: INFO: Pod pod-submit-status-0-7 on node node2 timings total=6.475872644s t=378ms run=0s execute=0s Oct 23 04:42:54.203: INFO: watch delete seen for pod-submit-status-2-14 Oct 23 
04:42:54.203: INFO: Pod pod-submit-status-2-14 on node node2 timings total=9.993908528s t=1.72s run=0s execute=0s Oct 23 04:42:54.215: INFO: watch delete seen for pod-submit-status-1-7 Oct 23 04:42:54.215: INFO: Pod pod-submit-status-1-7 on node node2 timings total=9.997068522s t=1.969s run=0s execute=0s Oct 23 04:43:04.196: INFO: watch delete seen for pod-submit-status-1-8 Oct 23 04:43:04.196: INFO: Pod pod-submit-status-1-8 on node node2 timings total=9.980705325s t=773ms run=0s execute=0s Oct 23 04:43:04.205: INFO: watch delete seen for pod-submit-status-0-8 Oct 23 04:43:04.205: INFO: Pod pod-submit-status-0-8 on node node2 timings total=13.527848761s t=309ms run=0s execute=0s Oct 23 04:43:14.203: INFO: watch delete seen for pod-submit-status-0-9 Oct 23 04:43:14.203: INFO: Pod pod-submit-status-0-9 on node node2 timings total=9.997921888s t=1.055s run=0s execute=0s Oct 23 04:43:24.202: INFO: watch delete seen for pod-submit-status-0-10 Oct 23 04:43:24.202: INFO: Pod pod-submit-status-0-10 on node node2 timings total=9.998391873s t=1.907s run=0s execute=0s Oct 23 04:43:29.212: INFO: watch delete seen for pod-submit-status-1-9 Oct 23 04:43:29.212: INFO: Pod pod-submit-status-1-9 on node node2 timings total=25.016161045s t=1.211s run=0s execute=0s Oct 23 04:43:30.733: INFO: watch delete seen for pod-submit-status-1-10 Oct 23 04:43:30.733: INFO: Pod pod-submit-status-1-10 on node node2 timings total=1.520479844s t=609ms run=0s execute=0s Oct 23 04:43:34.286: INFO: watch delete seen for pod-submit-status-0-11 Oct 23 04:43:34.286: INFO: Pod pod-submit-status-0-11 on node node2 timings total=10.084058627s t=631ms run=0s execute=0s Oct 23 04:43:44.201: INFO: watch delete seen for pod-submit-status-1-11 Oct 23 04:43:44.201: INFO: Pod pod-submit-status-1-11 on node node2 timings total=13.468514417s t=1.755s run=2s execute=0s Oct 23 04:43:44.209: INFO: watch delete seen for pod-submit-status-0-12 Oct 23 04:43:44.209: INFO: Pod pod-submit-status-0-12 on node node2 timings total=9.923398599s t=1.596s run=0s execute=0s Oct 23 04:43:54.200: INFO: watch delete seen for pod-submit-status-1-12 Oct 23 04:43:54.200: INFO: Pod pod-submit-status-1-12 on node node2 timings total=9.999233854s t=122ms run=0s execute=0s Oct 23 04:44:04.202: INFO: watch delete seen for pod-submit-status-1-13 Oct 23 04:44:04.202: INFO: Pod pod-submit-status-1-13 on node node2 timings total=10.001094317s t=735ms run=0s execute=0s Oct 23 04:44:04.212: INFO: watch delete seen for pod-submit-status-0-13 Oct 23 04:44:04.212: INFO: Pod pod-submit-status-0-13 on node node2 timings total=20.002100799s t=1.696s run=0s execute=0s Oct 23 04:44:14.211: INFO: watch delete seen for pod-submit-status-1-14 Oct 23 04:44:14.212: INFO: Pod pod-submit-status-1-14 on node node2 timings total=10.009856902s t=1.942s run=0s execute=0s Oct 23 04:44:14.220: INFO: watch delete seen for pod-submit-status-0-14 Oct 23 04:44:14.220: INFO: Pod pod-submit-status-0-14 on node node2 timings total=10.008472202s t=199ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:44:14.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8481" for this suite. 
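------------------------------
The "watch delete seen" log above comes from a loop asserting one invariant: a container whose command always exits 1 must never surface a terminated state with exit code 0, no matter how early the pod is deleted. A rough client-go equivalent of that check (a hypothetical helper, not the test's actual code):

package podstatus

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchForBadSuccess watches pods in ns and errors out if any container
// whose command always exits 1 ever reports a terminated state with exit
// code 0 - the invariant the test enforces across random deletion timings.
func watchForBadSuccess(ctx context.Context, c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok || ev.Type == watch.Error {
			continue
		}
		for _, cs := range pod.Status.ContainerStatuses {
			if t := cs.State.Terminated; t != nil && t.ExitCode == 0 {
				return fmt.Errorf("pod %s container %s reported success", pod.Name, cs.Name)
			}
		}
	}
	return nil
}
------------------------------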
• [SLOW TEST:195.545 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":6,"skipped":751,"failed":0} Oct 23 04:44:14.233: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:11.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-1c3c2cde-0e0a-4c7d-b756-600cc8f1c22f in namespace container-probe-3901 Oct 23 04:40:17.294: INFO: Started pod startup-1c3c2cde-0e0a-4c7d-b756-600cc8f1c22f in namespace container-probe-3901 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:40:17.296: INFO: Initial restart count of pod startup-1c3c2cde-0e0a-4c7d-b756-600cc8f1c22f is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:44:17.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3901" for this suite. 
• [SLOW TEST:246.598 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":2,"skipped":447,"failed":0} Oct 23 04:44:17.849: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:41.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-ba574742-1722-4e59-9ed6-de1043e6ba21 in namespace container-probe-4719 Oct 23 04:40:45.734: INFO: Started pod liveness-ba574742-1722-4e59-9ed6-de1043e6ba21 in namespace container-probe-4719 STEP: checking the pod's current state and verifying that restartCount is present Oct 23 04:40:45.736: INFO: Initial restart count of pod liveness-ba574742-1722-4e59-9ed6-de1043e6ba21 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:44:46.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4719" for this suite. 
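------------------------------
Together with the local-redirect case earlier (restarted after ~16s), this test pins down the kubelet's HTTP probe redirect handling: a redirect to the same host is followed, so a broken target keeps failing the probe and triggers restarts, while a redirect to a different host is treated as a probe success (with a warning event) and the container is left alone for the full four-minute watch. A sketch of the probe shape; the path is illustrative, not the suite's actual agnhost endpoint:

package redirectprobe

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// A same-host redirect is followed by the prober, so if the redirect target
// fails, the liveness probe fails and the container is restarted. Point the
// redirect at another host and the probe instead reports success.
var localRedirect = &corev1.Probe{
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/redirect?loc=/healthz-broken",
			Port: intstr.FromInt(8080),
		},
	},
	PeriodSeconds: 10,
}
------------------------------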
• [SLOW TEST:244.655 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":6,"skipped":777,"failed":0} Oct 23 04:44:46.338: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:40:55.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 Oct 23 04:40:55.076: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:40:57.080: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Oct 23 04:40:59.082: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Oct 23 04:42:06.113: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-10-23 04:41:30 +0000 UTC restartedAt=2021-10-23 04:42:04 +0000 UTC (34s) STEP: getting restart delay-1 Oct 23 04:42:52.286: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-10-23 04:42:09 +0000 UTC restartedAt=2021-10-23 04:42:51 +0000 UTC (42s) STEP: getting restart delay-2 Oct 23 04:44:26.651: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-10-23 04:42:56 +0000 UTC restartedAt=2021-10-23 04:44:25 +0000 UTC (1m29s) STEP: updating the image Oct 23 04:44:27.162: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Oct 23 04:44:51.218: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-10-23 04:44:36 +0000 UTC restartedAt=2021-10-23 04:44:48 +0000 UTC (12s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Oct 23 04:44:51.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5805" for this suite. 
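------------------------------
The delays above show the crash-loop back-off ladder growing 34s -> 42s -> 1m29s and then collapsing to 12s after the image update, because changing a container's image resets the kubelet's restart back-off for that container. Image is one of the few pod spec fields that is mutable in place; a minimal client-go sketch of such an update (hypothetical helper, assumes the pod's first container is the crash-looping one):

package backoff

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bumpImage updates the pod's first container image in place. For a
// crash-looping container this starts the restart back-off ladder over,
// which is why the post-update delay above drops to 12s.
func bumpImage(ctx context.Context, c kubernetes.Interface, ns, name, image string) error {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	pod.Spec.Containers[0].Image = image
	_, err = c.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}
------------------------------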
• [SLOW TEST:236.187 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":3,"skipped":366,"failed":0} Oct 23 04:44:51.230: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Oct 23 04:41:20.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Oct 23 04:41:20.743: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Oct 23 04:41:21.755: INFO: node status heartbeat is unchanged for 1.004233909s, waiting for 1m20s Oct 23 04:41:22.754: INFO: node status heartbeat is unchanged for 2.003687271s, waiting for 1m20s Oct 23 04:41:23.755: INFO: node status heartbeat is unchanged for 3.00426768s, waiting for 1m20s Oct 23 04:41:24.755: INFO: node status heartbeat is unchanged for 4.004020698s, waiting for 1m20s Oct 23 04:41:25.755: INFO: node status heartbeat is unchanged for 5.004192072s, waiting for 1m20s Oct 23 04:41:26.754: INFO: node status heartbeat is unchanged for 6.003951834s, waiting for 1m20s Oct 23 04:41:27.755: INFO: node status heartbeat is unchanged for 7.004672714s, waiting for 1m20s Oct 23 04:41:28.754: INFO: node status heartbeat is unchanged for 8.003576478s, waiting for 1m20s Oct 23 04:41:29.755: INFO: node status heartbeat is unchanged for 9.004349827s, waiting for 1m20s Oct 23 04:41:30.754: INFO: node status heartbeat is unchanged for 10.003581658s, waiting for 1m20s Oct 23 04:41:31.755: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s Oct 23 04:41:31.761: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  
LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:20 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:41:32.755: INFO: node status heartbeat is unchanged for 999.674454ms, waiting for 1m20s Oct 23 04:41:33.755: INFO: node status heartbeat is unchanged for 1.99998301s, waiting for 1m20s Oct 23 04:41:34.755: INFO: node status heartbeat is unchanged for 2.999671598s, waiting for 1m20s Oct 23 04:41:35.755: INFO: node status heartbeat is unchanged for 4.000138078s, waiting for 1m20s Oct 23 04:41:36.755: INFO: node status heartbeat is unchanged for 4.999607684s, waiting for 1m20s Oct 23 04:41:37.754: INFO: node status heartbeat is unchanged for 5.999370379s, waiting for 1m20s Oct 23 04:41:38.757: INFO: node status heartbeat is unchanged for 7.001992437s, waiting for 1m20s Oct 23 04:41:39.755: INFO: node status heartbeat is unchanged for 8.00024671s, waiting for 1m20s Oct 23 04:41:40.755: INFO: node status heartbeat is unchanged for 8.999819466s, waiting for 1m20s Oct 23 04:41:41.754: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:41:41.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:41 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",  
  Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:41 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:41 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:41:42.756: INFO: node status heartbeat is unchanged for 1.001492357s, waiting for 1m20s Oct 23 04:41:43.755: INFO: node status heartbeat is unchanged for 2.000452531s, waiting for 1m20s Oct 23 04:41:44.755: INFO: node status heartbeat is unchanged for 3.000531156s, waiting for 1m20s Oct 23 04:41:45.755: INFO: node status heartbeat is unchanged for 4.000692598s, waiting for 1m20s Oct 23 04:41:46.755: INFO: node status heartbeat is unchanged for 5.000931142s, waiting for 1m20s Oct 23 04:41:47.754: INFO: node status heartbeat is unchanged for 5.999740561s, waiting for 1m20s Oct 23 04:41:48.756: INFO: node status heartbeat is unchanged for 7.001531172s, waiting for 1m20s Oct 23 04:41:49.756: INFO: node status heartbeat is unchanged for 8.001350575s, waiting for 1m20s Oct 23 04:41:50.755: INFO: node status heartbeat is unchanged for 9.000728771s, waiting for 1m20s Oct 23 04:41:51.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:41:51.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:41 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:51 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:41 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:51 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: 
"PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:41 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:51 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:41:52.754: INFO: node status heartbeat is unchanged for 999.907393ms, waiting for 1m20s Oct 23 04:41:53.755: INFO: node status heartbeat is unchanged for 2.000198975s, waiting for 1m20s Oct 23 04:41:54.754: INFO: node status heartbeat is unchanged for 2.999967053s, waiting for 1m20s Oct 23 04:41:55.756: INFO: node status heartbeat is unchanged for 4.001845894s, waiting for 1m20s Oct 23 04:41:56.755: INFO: node status heartbeat is unchanged for 5.000654353s, waiting for 1m20s Oct 23 04:41:57.755: INFO: node status heartbeat is unchanged for 6.000068339s, waiting for 1m20s Oct 23 04:41:58.754: INFO: node status heartbeat is unchanged for 6.999705458s, waiting for 1m20s Oct 23 04:41:59.754: INFO: node status heartbeat is unchanged for 7.99953331s, waiting for 1m20s Oct 23 04:42:00.756: INFO: node status heartbeat is unchanged for 9.001439048s, waiting for 1m20s Oct 23 04:42:01.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:42:01.761: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:01 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:01 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:41:51 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:01 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    
},    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... // 5 identical fields   } Oct 23 04:42:02.755: INFO: node status heartbeat is unchanged for 998.37409ms, waiting for 1m20s Oct 23 04:42:03.758: INFO: node status heartbeat is unchanged for 2.001568279s, waiting for 1m20s Oct 23 04:42:04.758: INFO: node status heartbeat is unchanged for 3.001664557s, waiting for 1m20s Oct 23 04:42:05.755: INFO: node status heartbeat is unchanged for 3.998824009s, waiting for 1m20s Oct 23 04:42:06.755: INFO: node status heartbeat is unchanged for 4.999211778s, waiting for 1m20s Oct 23 04:42:07.754: INFO: node status heartbeat is unchanged for 5.998332264s, waiting for 1m20s Oct 23 04:42:08.756: INFO: node status heartbeat is unchanged for 6.999661762s, waiting for 1m20s Oct 23 04:42:09.755: INFO: node status heartbeat is unchanged for 7.99855309s, waiting for 1m20s Oct 23 04:42:10.757: INFO: node status heartbeat is unchanged for 9.000360618s, waiting for 1m20s Oct 23 04:42:11.757: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:42:11.762: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:11 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:11 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:01 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:11 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:42:12.754: INFO: node status heartbeat is unchanged for 997.33121ms, waiting for 1m20s Oct 23 04:42:13.755: INFO: node status heartbeat is unchanged for 1.99797407s, waiting for 1m20s Oct 23 04:42:14.757: INFO: node status heartbeat is unchanged for 3.000013508s, waiting for 1m20s Oct 23 04:42:15.756: INFO: node status heartbeat is unchanged for 3.999238201s, waiting for 1m20s Oct 23 04:42:16.756: INFO: node status heartbeat is unchanged for 4.999672426s, waiting for 1m20s Oct 23 04:42:17.754: INFO: node status heartbeat is unchanged for 5.997223444s, waiting for 1m20s Oct 23 04:42:18.758: INFO: node status heartbeat is unchanged for 7.001521387s, waiting for 1m20s Oct 23 04:42:19.756: INFO: node status heartbeat is unchanged for 7.999437862s, waiting for 1m20s Oct 23 04:42:20.758: INFO: node status heartbeat is unchanged for 9.000821681s, waiting for 1m20s Oct 23 04:42:21.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:42:21.760: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:21 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:21 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:11 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:21 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:42:22.755: INFO: node status heartbeat is unchanged for 999.865056ms, waiting for 1m20s Oct 23 04:42:23.756: INFO: node status heartbeat is unchanged for 2.001009491s, waiting for 1m20s Oct 23 04:42:24.757: INFO: node status heartbeat is unchanged for 3.001391518s, waiting for 1m20s Oct 23 04:42:25.755: INFO: node status heartbeat is unchanged for 3.999624545s, waiting for 1m20s Oct 23 04:42:26.756: INFO: node status heartbeat is unchanged for 5.000486046s, waiting for 1m20s Oct 23 04:42:27.755: INFO: node status heartbeat is unchanged for 5.999494903s, waiting for 1m20s Oct 23 04:42:28.755: INFO: node status heartbeat is unchanged for 6.999648283s, waiting for 1m20s Oct 23 04:42:29.755: INFO: node status heartbeat is unchanged for 7.999112745s, waiting for 1m20s Oct 23 04:42:30.755: INFO: node status heartbeat is unchanged for 8.999248426s, waiting for 1m20s Oct 23 04:42:31.754: INFO: node status heartbeat is unchanged for 9.998366111s, waiting for 1m20s Oct 23 04:42:32.754: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:42:32.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:21 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:31 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:42:33.755: INFO: node status heartbeat is unchanged for 1.001091941s, waiting for 1m20s Oct 23 04:42:34.754: INFO: node status heartbeat is unchanged for 2.000024655s, waiting for 1m20s Oct 23 04:42:35.755: INFO: node status heartbeat is unchanged for 3.001436194s, waiting for 1m20s Oct 23 04:42:36.754: INFO: node status heartbeat is unchanged for 4.000406273s, waiting for 1m20s Oct 23 04:42:37.754: INFO: node status heartbeat is unchanged for 5.000602092s, waiting for 1m20s Oct 23 04:42:38.754: INFO: node status heartbeat is unchanged for 6.000424396s, waiting for 1m20s Oct 23 04:42:39.755: INFO: node status heartbeat is unchanged for 7.000945928s, waiting for 1m20s Oct 23 04:42:40.755: INFO: node status heartbeat is unchanged for 8.00101104s, waiting for 1m20s Oct 23 04:42:41.755: INFO: node status heartbeat is unchanged for 9.00103451s, waiting for 1m20s Oct 23 04:42:42.787: INFO: node status heartbeat is unchanged for 10.032821595s, waiting for 1m20s Oct 23 04:42:43.755: INFO: node status heartbeat is unchanged for 11.000888162s, waiting for 1m20s Oct 23 04:42:44.755: INFO: node status heartbeat is unchanged for 12.000825438s, waiting for 1m20s Oct 23 04:42:45.754: INFO: node status heartbeat changed in 14s (with other status changes), waiting for 40s Oct 23 04:42:45.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:45 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:45 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:31 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:45 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:42:46.755: INFO: node status heartbeat is unchanged for 1.001043315s, waiting for 1m20s Oct 23 04:42:47.755: INFO: node status heartbeat is unchanged for 2.000706813s, waiting for 1m20s Oct 23 04:42:48.758: INFO: node status heartbeat is unchanged for 3.003549902s, waiting for 1m20s Oct 23 04:42:49.756: INFO: node status heartbeat is unchanged for 4.001637572s, waiting for 1m20s Oct 23 04:42:50.755: INFO: node status heartbeat is unchanged for 5.000690208s, waiting for 1m20s Oct 23 04:42:51.754: INFO: node status heartbeat is unchanged for 5.999752103s, waiting for 1m20s Oct 23 04:42:52.755: INFO: node status heartbeat is unchanged for 7.000424283s, waiting for 1m20s Oct 23 04:42:53.755: INFO: node status heartbeat is unchanged for 8.0011076s, waiting for 1m20s Oct 23 04:42:54.755: INFO: node status heartbeat is unchanged for 9.000379594s, waiting for 1m20s Oct 23 04:42:55.754: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:42:55.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:45 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:55 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:45 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:55 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:45 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:55 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:42:56.755: INFO: node status heartbeat is unchanged for 1.000452574s, waiting for 1m20s Oct 23 04:42:57.755: INFO: node status heartbeat is unchanged for 2.000589619s, waiting for 1m20s Oct 23 04:42:58.756: INFO: node status heartbeat is unchanged for 3.001279226s, waiting for 1m20s Oct 23 04:42:59.754: INFO: node status heartbeat is unchanged for 4.000163788s, waiting for 1m20s Oct 23 04:43:00.756: INFO: node status heartbeat is unchanged for 5.001645158s, waiting for 1m20s Oct 23 04:43:01.755: INFO: node status heartbeat is unchanged for 6.000418046s, waiting for 1m20s Oct 23 04:43:02.755: INFO: node status heartbeat is unchanged for 7.000568212s, waiting for 1m20s Oct 23 04:43:03.754: INFO: node status heartbeat is unchanged for 7.999501528s, waiting for 1m20s Oct 23 04:43:04.755: INFO: node status heartbeat is unchanged for 9.000483488s, waiting for 1m20s Oct 23 04:43:05.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:43:05.760: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:55 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:05 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:55 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:05 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:42:55 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:05 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:43:06.755: INFO: node status heartbeat is unchanged for 999.672173ms, waiting for 1m20s Oct 23 04:43:07.755: INFO: node status heartbeat is unchanged for 1.999795835s, waiting for 1m20s Oct 23 04:43:08.755: INFO: node status heartbeat is unchanged for 2.999648988s, waiting for 1m20s Oct 23 04:43:09.754: INFO: node status heartbeat is unchanged for 3.999579521s, waiting for 1m20s Oct 23 04:43:10.758: INFO: node status heartbeat is unchanged for 5.002848319s, waiting for 1m20s Oct 23 04:43:11.756: INFO: node status heartbeat is unchanged for 6.000944561s, waiting for 1m20s Oct 23 04:43:12.755: INFO: node status heartbeat is unchanged for 6.999806658s, waiting for 1m20s Oct 23 04:43:13.755: INFO: node status heartbeat is unchanged for 8.000536746s, waiting for 1m20s Oct 23 04:43:14.755: INFO: node status heartbeat is unchanged for 8.999845435s, waiting for 1m20s Oct 23 04:43:15.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:43:15.760: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:05 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:15 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:05 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:15 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:05 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:15 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:43:16.754: INFO: node status heartbeat is unchanged for 998.948966ms, waiting for 1m20s Oct 23 04:43:17.755: INFO: node status heartbeat is unchanged for 1.999964643s, waiting for 1m20s Oct 23 04:43:18.754: INFO: node status heartbeat is unchanged for 2.998973723s, waiting for 1m20s Oct 23 04:43:19.754: INFO: node status heartbeat is unchanged for 3.998956938s, waiting for 1m20s Oct 23 04:43:20.755: INFO: node status heartbeat is unchanged for 4.999348957s, waiting for 1m20s Oct 23 04:43:21.754: INFO: node status heartbeat is unchanged for 5.999086977s, waiting for 1m20s Oct 23 04:43:22.755: INFO: node status heartbeat is unchanged for 6.999759325s, waiting for 1m20s Oct 23 04:43:23.755: INFO: node status heartbeat is unchanged for 7.999412685s, waiting for 1m20s Oct 23 04:43:24.755: INFO: node status heartbeat is unchanged for 9.000144681s, waiting for 1m20s Oct 23 04:43:25.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:43:25.760: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:15 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:25 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:15 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:25 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:15 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:25 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:43:26.754: INFO: node status heartbeat is unchanged for 999.520614ms, waiting for 1m20s Oct 23 04:43:27.755: INFO: node status heartbeat is unchanged for 2.000200072s, waiting for 1m20s Oct 23 04:43:28.755: INFO: node status heartbeat is unchanged for 2.999789135s, waiting for 1m20s Oct 23 04:43:29.755: INFO: node status heartbeat is unchanged for 3.999869949s, waiting for 1m20s Oct 23 04:43:30.755: INFO: node status heartbeat is unchanged for 4.999778094s, waiting for 1m20s Oct 23 04:43:31.754: INFO: node status heartbeat is unchanged for 5.998915639s, waiting for 1m20s Oct 23 04:43:32.754: INFO: node status heartbeat is unchanged for 6.999038543s, waiting for 1m20s Oct 23 04:43:33.754: INFO: node status heartbeat is unchanged for 7.999061246s, waiting for 1m20s Oct 23 04:43:34.755: INFO: node status heartbeat is unchanged for 9.000452548s, waiting for 1m20s Oct 23 04:43:35.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:43:35.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:25 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:35 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:25 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:35 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:25 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:35 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:43:36.755: INFO: node status heartbeat is unchanged for 1.00039961s, waiting for 1m20s Oct 23 04:43:37.754: INFO: node status heartbeat is unchanged for 1.999054894s, waiting for 1m20s Oct 23 04:43:38.756: INFO: node status heartbeat is unchanged for 3.000923018s, waiting for 1m20s Oct 23 04:43:39.755: INFO: node status heartbeat is unchanged for 4.000291831s, waiting for 1m20s Oct 23 04:43:40.756: INFO: node status heartbeat is unchanged for 5.001388714s, waiting for 1m20s Oct 23 04:43:41.755: INFO: node status heartbeat is unchanged for 6.000022594s, waiting for 1m20s Oct 23 04:43:42.755: INFO: node status heartbeat is unchanged for 7.000031542s, waiting for 1m20s Oct 23 04:43:43.756: INFO: node status heartbeat is unchanged for 8.001061091s, waiting for 1m20s Oct 23 04:43:44.757: INFO: node status heartbeat is unchanged for 9.00277822s, waiting for 1m20s Oct 23 04:43:45.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:43:45.760: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:35 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:45 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:35 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:45 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:35 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:45 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:43:46.756: INFO: node status heartbeat is unchanged for 1.001352389s, waiting for 1m20s Oct 23 04:43:47.755: INFO: node status heartbeat is unchanged for 1.999754463s, waiting for 1m20s Oct 23 04:43:48.755: INFO: node status heartbeat is unchanged for 3.000279066s, waiting for 1m20s Oct 23 04:43:49.756: INFO: node status heartbeat is unchanged for 4.000615348s, waiting for 1m20s Oct 23 04:43:50.755: INFO: node status heartbeat is unchanged for 4.999870591s, waiting for 1m20s Oct 23 04:43:51.755: INFO: node status heartbeat is unchanged for 6.000164449s, waiting for 1m20s Oct 23 04:43:52.755: INFO: node status heartbeat is unchanged for 6.999892089s, waiting for 1m20s Oct 23 04:43:53.755: INFO: node status heartbeat is unchanged for 7.999737726s, waiting for 1m20s Oct 23 04:43:54.754: INFO: node status heartbeat is unchanged for 8.999122004s, waiting for 1m20s Oct 23 04:43:55.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:43:55.761: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:45 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:55 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:45 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:55 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:45 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:55 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:43:56.755: INFO: node status heartbeat is unchanged for 998.962476ms, waiting for 1m20s Oct 23 04:43:57.754: INFO: node status heartbeat is unchanged for 1.998589352s, waiting for 1m20s Oct 23 04:43:58.754: INFO: node status heartbeat is unchanged for 2.998285736s, waiting for 1m20s Oct 23 04:43:59.755: INFO: node status heartbeat is unchanged for 3.998801917s, waiting for 1m20s Oct 23 04:44:00.754: INFO: node status heartbeat is unchanged for 4.998225048s, waiting for 1m20s Oct 23 04:44:01.755: INFO: node status heartbeat is unchanged for 5.998849827s, waiting for 1m20s Oct 23 04:44:02.756: INFO: node status heartbeat is unchanged for 7.000071406s, waiting for 1m20s Oct 23 04:44:03.756: INFO: node status heartbeat is unchanged for 7.999852578s, waiting for 1m20s Oct 23 04:44:04.756: INFO: node status heartbeat is unchanged for 8.999647713s, waiting for 1m20s Oct 23 04:44:05.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:44:05.759: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:55 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:05 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:55 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:05 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:43:55 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:05 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:44:06.757: INFO: node status heartbeat is unchanged for 1.002916198s, waiting for 1m20s Oct 23 04:44:07.755: INFO: node status heartbeat is unchanged for 2.000227371s, waiting for 1m20s Oct 23 04:44:08.756: INFO: node status heartbeat is unchanged for 3.001458215s, waiting for 1m20s Oct 23 04:44:09.755: INFO: node status heartbeat is unchanged for 4.000477584s, waiting for 1m20s Oct 23 04:44:10.756: INFO: node status heartbeat is unchanged for 5.001484615s, waiting for 1m20s Oct 23 04:44:11.757: INFO: node status heartbeat is unchanged for 6.002194502s, waiting for 1m20s Oct 23 04:44:12.754: INFO: node status heartbeat is unchanged for 6.999906155s, waiting for 1m20s Oct 23 04:44:13.755: INFO: node status heartbeat is unchanged for 8.000943686s, waiting for 1m20s Oct 23 04:44:14.756: INFO: node status heartbeat is unchanged for 9.001379576s, waiting for 1m20s Oct 23 04:44:15.758: INFO: node status heartbeat is unchanged for 10.003634663s, waiting for 1m20s Oct 23 04:44:16.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:44:16.761: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:05 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:15 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:05 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:15 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:05 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:15 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:44:17.754: INFO: node status heartbeat is unchanged for 998.487549ms, waiting for 1m20s Oct 23 04:44:18.757: INFO: node status heartbeat is unchanged for 2.000840331s, waiting for 1m20s Oct 23 04:44:19.755: INFO: node status heartbeat is unchanged for 2.999239638s, waiting for 1m20s Oct 23 04:44:20.757: INFO: node status heartbeat is unchanged for 4.00142328s, waiting for 1m20s Oct 23 04:44:21.756: INFO: node status heartbeat is unchanged for 5.000226674s, waiting for 1m20s Oct 23 04:44:22.754: INFO: node status heartbeat is unchanged for 5.998661028s, waiting for 1m20s Oct 23 04:44:23.755: INFO: node status heartbeat is unchanged for 6.998987719s, waiting for 1m20s Oct 23 04:44:24.755: INFO: node status heartbeat is unchanged for 7.999035561s, waiting for 1m20s Oct 23 04:44:25.755: INFO: node status heartbeat is unchanged for 8.999764009s, waiting for 1m20s Oct 23 04:44:26.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:44:26.760: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:15 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:25 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:15 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:25 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:15 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:25 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
// 5 identical fields   } Oct 23 04:44:27.755: INFO: node status heartbeat is unchanged for 1.000433316s, waiting for 1m20s Oct 23 04:44:28.754: INFO: node status heartbeat is unchanged for 1.99913599s, waiting for 1m20s Oct 23 04:44:29.756: INFO: node status heartbeat is unchanged for 3.001585286s, waiting for 1m20s Oct 23 04:44:30.755: INFO: node status heartbeat is unchanged for 4.000590236s, waiting for 1m20s Oct 23 04:44:31.757: INFO: node status heartbeat is unchanged for 5.002179468s, waiting for 1m20s Oct 23 04:44:32.755: INFO: node status heartbeat is unchanged for 6.000005319s, waiting for 1m20s Oct 23 04:44:33.756: INFO: node status heartbeat is unchanged for 7.001111821s, waiting for 1m20s Oct 23 04:44:34.758: INFO: node status heartbeat is unchanged for 8.002898653s, waiting for 1m20s Oct 23 04:44:35.758: INFO: node status heartbeat is unchanged for 9.003034246s, waiting for 1m20s Oct 23 04:44:36.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s Oct 23 04:44:36.761: INFO:   v1.NodeStatus{    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},    Phase: "",    Conditions: []v1.NodeCondition{    {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},    {    Type: "MemoryPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:25 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:35 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientMemory",    Message: "kubelet has sufficient memory available",    },    {    Type: "DiskPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:25 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:35 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasNoDiskPressure",    Message: "kubelet has no disk pressure",    },    {    Type: "PIDPressure",    Status: "False", -  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:25 +0000 UTC"}, +  LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:35 +0000 UTC"},    LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},    Reason: "KubeletHasSufficientPID",    Message: "kubelet has sufficient PID available",    },    {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},    },    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},    ... 
Oct 23 04:44:37.755: INFO: node status heartbeat is unchanged for 999.719675ms, waiting for 1m20s
Oct 23 04:44:38.756: INFO: node status heartbeat is unchanged for 2.000373472s, waiting for 1m20s
Oct 23 04:44:39.755: INFO: node status heartbeat is unchanged for 2.999320429s, waiting for 1m20s
Oct 23 04:44:40.755: INFO: node status heartbeat is unchanged for 3.99963804s, waiting for 1m20s
Oct 23 04:44:41.756: INFO: node status heartbeat is unchanged for 5.000057211s, waiting for 1m20s
Oct 23 04:44:42.755: INFO: node status heartbeat is unchanged for 5.999155542s, waiting for 1m20s
Oct 23 04:44:43.756: INFO: node status heartbeat is unchanged for 7.000785912s, waiting for 1m20s
Oct 23 04:44:44.756: INFO: node status heartbeat is unchanged for 7.999976827s, waiting for 1m20s
Oct 23 04:44:45.757: INFO: node status heartbeat is unchanged for 9.00096349s, waiting for 1m20s
Oct 23 04:44:46.756: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Oct 23 04:44:46.761: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:35 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:46 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:35 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:46 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:35 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:46 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:44:47.756: INFO: node status heartbeat is unchanged for 999.523032ms, waiting for 1m20s
Oct 23 04:44:48.757: INFO: node status heartbeat is unchanged for 2.000256175s, waiting for 1m20s
Oct 23 04:44:49.755: INFO: node status heartbeat is unchanged for 2.998657576s, waiting for 1m20s
Oct 23 04:44:50.756: INFO: node status heartbeat is unchanged for 3.999539549s, waiting for 1m20s
Oct 23 04:44:51.755: INFO: node status heartbeat is unchanged for 4.998913763s, waiting for 1m20s
Oct 23 04:44:52.756: INFO: node status heartbeat is unchanged for 5.999422081s, waiting for 1m20s
Oct 23 04:44:53.756: INFO: node status heartbeat is unchanged for 7.000098368s, waiting for 1m20s
Oct 23 04:44:54.757: INFO: node status heartbeat is unchanged for 8.000838284s, waiting for 1m20s
Oct 23 04:44:55.755: INFO: node status heartbeat is unchanged for 8.998955416s, waiting for 1m20s
Oct 23 04:44:56.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:44:56.761: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:46 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:56 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:46 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:56 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:46 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:56 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:44:57.756: INFO: node status heartbeat is unchanged for 1.000389734s, waiting for 1m20s
Oct 23 04:44:58.755: INFO: node status heartbeat is unchanged for 1.999157169s, waiting for 1m20s
Oct 23 04:44:59.755: INFO: node status heartbeat is unchanged for 2.999393655s, waiting for 1m20s
Oct 23 04:45:00.755: INFO: node status heartbeat is unchanged for 3.999055918s, waiting for 1m20s
Oct 23 04:45:01.755: INFO: node status heartbeat is unchanged for 4.998846875s, waiting for 1m20s
Oct 23 04:45:02.754: INFO: node status heartbeat is unchanged for 5.998320785s, waiting for 1m20s
Oct 23 04:45:03.756: INFO: node status heartbeat is unchanged for 6.999514414s, waiting for 1m20s
Oct 23 04:45:04.756: INFO: node status heartbeat is unchanged for 7.999625592s, waiting for 1m20s
Oct 23 04:45:05.756: INFO: node status heartbeat is unchanged for 8.999627005s, waiting for 1m20s
Oct 23 04:45:06.755: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:45:06.760: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:56 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:06 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:56 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:06 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:44:56 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:06 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:45:07.755: INFO: node status heartbeat is unchanged for 1.000004815s, waiting for 1m20s
Oct 23 04:45:08.757: INFO: node status heartbeat is unchanged for 2.001750967s, waiting for 1m20s
Oct 23 04:45:09.754: INFO: node status heartbeat is unchanged for 2.999077589s, waiting for 1m20s
Oct 23 04:45:10.754: INFO: node status heartbeat is unchanged for 3.999077851s, waiting for 1m20s
Oct 23 04:45:11.756: INFO: node status heartbeat is unchanged for 5.000774045s, waiting for 1m20s
Oct 23 04:45:12.755: INFO: node status heartbeat is unchanged for 5.99979605s, waiting for 1m20s
Oct 23 04:45:13.755: INFO: node status heartbeat is unchanged for 7.000155372s, waiting for 1m20s
Oct 23 04:45:14.755: INFO: node status heartbeat is unchanged for 7.999476297s, waiting for 1m20s
Oct 23 04:45:15.755: INFO: node status heartbeat is unchanged for 8.99968176s, waiting for 1m20s
Oct 23 04:45:16.757: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:45:16.762: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:06 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:16 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:06 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:16 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:06 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:16 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:45:17.754: INFO: node status heartbeat is unchanged for 997.398623ms, waiting for 1m20s
Oct 23 04:45:18.754: INFO: node status heartbeat is unchanged for 1.997600984s, waiting for 1m20s
Oct 23 04:45:19.755: INFO: node status heartbeat is unchanged for 2.998282424s, waiting for 1m20s
Oct 23 04:45:20.758: INFO: node status heartbeat is unchanged for 4.001348883s, waiting for 1m20s
Oct 23 04:45:21.756: INFO: node status heartbeat is unchanged for 4.999611005s, waiting for 1m20s
Oct 23 04:45:22.756: INFO: node status heartbeat is unchanged for 5.999232416s, waiting for 1m20s
Oct 23 04:45:23.759: INFO: node status heartbeat is unchanged for 7.001921803s, waiting for 1m20s
Oct 23 04:45:24.757: INFO: node status heartbeat is unchanged for 8.000466966s, waiting for 1m20s
Oct 23 04:45:25.756: INFO: node status heartbeat is unchanged for 8.999526356s, waiting for 1m20s
Oct 23 04:45:26.757: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:45:26.762: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:16 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:26 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:16 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:26 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:16 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:26 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:45:27.755: INFO: node status heartbeat is unchanged for 998.342838ms, waiting for 1m20s
Oct 23 04:45:28.756: INFO: node status heartbeat is unchanged for 1.999507569s, waiting for 1m20s
Oct 23 04:45:29.755: INFO: node status heartbeat is unchanged for 2.99843815s, waiting for 1m20s
Oct 23 04:45:30.759: INFO: node status heartbeat is unchanged for 4.001641861s, waiting for 1m20s
Oct 23 04:45:31.756: INFO: node status heartbeat is unchanged for 4.99943916s, waiting for 1m20s
Oct 23 04:45:32.755: INFO: node status heartbeat is unchanged for 5.998013456s, waiting for 1m20s
Oct 23 04:45:33.757: INFO: node status heartbeat is unchanged for 7.000437236s, waiting for 1m20s
Oct 23 04:45:34.757: INFO: node status heartbeat is unchanged for 7.999849639s, waiting for 1m20s
Oct 23 04:45:35.754: INFO: node status heartbeat is unchanged for 8.997628807s, waiting for 1m20s
Oct 23 04:45:36.757: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:45:36.762: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:26 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:36 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:26 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:36 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:26 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:36 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:45:37.755: INFO: node status heartbeat is unchanged for 997.846102ms, waiting for 1m20s
Oct 23 04:45:38.758: INFO: node status heartbeat is unchanged for 2.001240399s, waiting for 1m20s
Oct 23 04:45:39.755: INFO: node status heartbeat is unchanged for 2.997876896s, waiting for 1m20s
Oct 23 04:45:40.757: INFO: node status heartbeat is unchanged for 4.000367294s, waiting for 1m20s
Oct 23 04:45:41.756: INFO: node status heartbeat is unchanged for 4.999642041s, waiting for 1m20s
Oct 23 04:45:42.755: INFO: node status heartbeat is unchanged for 5.998228635s, waiting for 1m20s
Oct 23 04:45:43.756: INFO: node status heartbeat is unchanged for 6.999730858s, waiting for 1m20s
Oct 23 04:45:44.758: INFO: node status heartbeat is unchanged for 8.001663076s, waiting for 1m20s
Oct 23 04:45:45.756: INFO: node status heartbeat is unchanged for 8.998844206s, waiting for 1m20s
Oct 23 04:45:46.758: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:45:46.762: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:36 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:46 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:36 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:46 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:36 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:46 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:45:47.755: INFO: node status heartbeat is unchanged for 997.050396ms, waiting for 1m20s
Oct 23 04:45:48.758: INFO: node status heartbeat is unchanged for 1.99979727s, waiting for 1m20s
Oct 23 04:45:49.756: INFO: node status heartbeat is unchanged for 2.997884513s, waiting for 1m20s
Oct 23 04:45:50.756: INFO: node status heartbeat is unchanged for 3.997912424s, waiting for 1m20s
Oct 23 04:45:51.756: INFO: node status heartbeat is unchanged for 4.997769765s, waiting for 1m20s
Oct 23 04:45:52.755: INFO: node status heartbeat is unchanged for 5.997539787s, waiting for 1m20s
Oct 23 04:45:53.756: INFO: node status heartbeat is unchanged for 6.997998913s, waiting for 1m20s
Oct 23 04:45:54.756: INFO: node status heartbeat is unchanged for 7.998606433s, waiting for 1m20s
Oct 23 04:45:55.756: INFO: node status heartbeat is unchanged for 8.998269262s, waiting for 1m20s
Oct 23 04:45:56.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:45:56.760: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:46 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:56 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:46 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:56 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:46 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:56 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:45:57.755: INFO: node status heartbeat is unchanged for 999.945219ms, waiting for 1m20s
Oct 23 04:45:58.757: INFO: node status heartbeat is unchanged for 2.00184133s, waiting for 1m20s
Oct 23 04:45:59.757: INFO: node status heartbeat is unchanged for 3.0015696s, waiting for 1m20s
Oct 23 04:46:00.758: INFO: node status heartbeat is unchanged for 4.002196925s, waiting for 1m20s
Oct 23 04:46:01.760: INFO: node status heartbeat is unchanged for 5.004128747s, waiting for 1m20s
Oct 23 04:46:02.755: INFO: node status heartbeat is unchanged for 5.99943939s, waiting for 1m20s
Oct 23 04:46:03.756: INFO: node status heartbeat is unchanged for 7.000450347s, waiting for 1m20s
Oct 23 04:46:04.755: INFO: node status heartbeat is unchanged for 7.999432389s, waiting for 1m20s
Oct 23 04:46:05.756: INFO: node status heartbeat is unchanged for 9.000836373s, waiting for 1m20s
Oct 23 04:46:06.756: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
Oct 23 04:46:06.761: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:56 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:06 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:56 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:06 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:45:56 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:06 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
Oct 23 04:46:07.755: INFO: node status heartbeat is unchanged for 998.722991ms, waiting for 1m20s
Oct 23 04:46:08.758: INFO: node status heartbeat is unchanged for 2.001621223s, waiting for 1m20s
Oct 23 04:46:09.754: INFO: node status heartbeat is unchanged for 2.998030846s, waiting for 1m20s
Oct 23 04:46:10.755: INFO: node status heartbeat is unchanged for 3.998432069s, waiting for 1m20s
Oct 23 04:46:11.755: INFO: node status heartbeat is unchanged for 4.999055671s, waiting for 1m20s
Oct 23 04:46:12.756: INFO: node status heartbeat is unchanged for 5.999261482s, waiting for 1m20s
Oct 23 04:46:13.757: INFO: node status heartbeat is unchanged for 7.000955669s, waiting for 1m20s
Oct 23 04:46:14.757: INFO: node status heartbeat is unchanged for 8.000602832s, waiting for 1m20s
Oct 23 04:46:15.757: INFO: node status heartbeat is unchanged for 9.000411958s, waiting for 1m20s
Oct 23 04:46:16.756: INFO: node status heartbeat is unchanged for 9.999228695s, waiting for 1m20s
Oct 23 04:46:17.754: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s
Oct 23 04:46:17.759: INFO:   v1.NodeStatus{
    Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "439913340Ki", Format: "BinarySI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "405424133473", Format: "DecimalSI"}, s"example.com/fakePTSRes": {i: {...}, s: "10", Format: "DecimalSI"}, ...},
    Phase: "",
    Conditions: []v1.NodeCondition{
      {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, LastTransitionTime: {Time: s"2021-10-22 21:09:10 +0000 UTC"}, ...},
      {
        Type: "MemoryPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:06 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:17 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientMemory",
        Message: "kubelet has sufficient memory available",
      },
      {
        Type: "DiskPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:06 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:17 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasNoDiskPressure",
        Message: "kubelet has no disk pressure",
      },
      {
        Type: "PIDPressure",
        Status: "False",
-       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:06 +0000 UTC"},
+       LastHeartbeatTime: v1.Time{Time: s"2021-10-23 04:46:17 +0000 UTC"},
        LastTransitionTime: {Time: s"2021-10-22 21:05:23 +0000 UTC"},
        Reason: "KubeletHasSufficientPID",
        Message: "kubelet has sufficient PID available",
      },
      {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2021-10-22 21:06:32 +0000 UTC"}, Reason: "KubeletReady", ...},
    },
    Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}},
    DaemonEndpoints: {KubeletEndpoint: {Port: 10250}},
    ... // 5 identical fields
  }
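Only the three kubelet heartbeat timestamps move in these diffs, roughly every 10 seconds, and the "(with other status changes)" note explains why the status is written at all; under NodeLease the kubelet's primary liveness signal is the Lease it renews in the kube-node-lease namespace (also about every 10 seconds by default), which is why the test tolerates infrequent status reports as long as Ready stays true. A short sketch of reading that Lease with client-go; checkLease is an illustrative name, and the nil-guards are needed because the relevant spec fields are pointers:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// checkLease reports how recently the kubelet renewed its per-node Lease.
// Illustrative sketch; not a helper from the e2e framework.
func checkLease(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	lease, err := cs.CoordinationV1().Leases(corev1.NamespaceNodeLease).Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if lease.Spec.RenewTime == nil || lease.Spec.LeaseDurationSeconds == nil {
		return fmt.Errorf("lease for %s has not been renewed yet", nodeName)
	}
	age := time.Since(lease.Spec.RenewTime.Time)
	fmt.Printf("lease for %s renewed %v ago (lease duration %ds)\n",
		nodeName, age.Round(time.Millisecond), *lease.Spec.LeaseDurationSeconds)
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	if err := checkLease(context.Background(), kubernetes.NewForConfigOrDie(cfg), "node1"); err != nil {
		panic(err)
	}
}
```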
    // 5 identical fields
  }
Oct 23 04:46:18.757: INFO: node status heartbeat is unchanged for 1.00246771s, waiting for 1m20s
Oct 23 04:46:19.756: INFO: node status heartbeat is unchanged for 2.002193279s, waiting for 1m20s
Oct 23 04:46:20.755: INFO: node status heartbeat is unchanged for 3.000684712s, waiting for 1m20s
Oct 23 04:46:20.757: INFO: node status heartbeat is unchanged for 3.003237883s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 04:46:20.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-317" for this suite.

• [SLOW TEST:300.052 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":2,"skipped":13,"failed":0}
Oct 23 04:46:20.778: INFO: Running AfterSuite actions on all nodes

[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Oct 23 04:40:30.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
Oct 23 04:40:30.624: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:32.627: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:34.628: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:36.626: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:38.627: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:40.628: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
Oct 23 04:40:42.627: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
Oct 23 04:52:07.995: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-10-23 04:47:04 +0000 UTC restartedAt=2021-10-23 04:52:06 +0000 UTC (5m2s)
Oct 23 04:57:18.313: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-10-23 04:52:11 +0000 UTC restartedAt=2021-10-23 04:57:17 +0000 UTC (5m6s)
Oct 23 05:02:35.706: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-10-23 04:57:22 +0000 UTC restartedAt=2021-10-23 05:02:34 +0000 UTC (5m12s)
STEP: getting restart delay after a capped delay
Oct 23 05:07:52.159: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-10-23 05:02:39 +0000 UTC restartedAt=2021-10-23 05:07:50 +0000 UTC (5m11s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 23 05:07:52.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3872" for this suite.

• [SLOW TEST:1641.582 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":3,"skipped":334,"failed":0}
Oct 23 05:07:52.172: INFO: Running AfterSuite actions on all nodes

{"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":8,"skipped":640,"failed":0}
Oct 23 04:42:20.099: INFO: Running AfterSuite actions on all nodes

Oct 23 05:07:52.194: INFO: Running AfterSuite actions on node 1
Oct 23 05:07:52.194: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Panic!] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55

Ran 53 of 5770 Specs in 1674.893 seconds
FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5717 Skipped

Ginkgo ran 1 suite in 27m56.446433366s
Test Suite Failed
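The four getRestartDelay samples above (5m2s, 5m6s, 5m12s, 5m11s for restart counts 7 through 10) are the point of the back-off test: the kubelet doubles its crash-loop restart delay, starting around 10s, until it hits MaxContainerBackOff, which is 300s in the kubelet sources this suite exercises; the few extra seconds on top of 5m are pod-sync latency, not further back-off growth. A sketch of both sides of the check, with illustrative names (expectedBackOff, observedRestartDelay); the observed delay is recovered the same way the log lines compute it, restartedAt minus finishedAt from the container status:

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
)

const (
	containerBackOff    = 10 * time.Second // kubelet's initial crash-loop delay
	maxContainerBackOff = 5 * time.Minute  // kubelet's cap (MaxContainerBackOff)
)

// expectedBackOff doubles the delay per restart and caps it, mirroring the
// kubelet's crash-loop behaviour that the test exercises.
func expectedBackOff(restartCount int32) time.Duration {
	d := containerBackOff
	for i := int32(1); i < restartCount; i++ {
		d *= 2
		if d >= maxContainerBackOff {
			return maxContainerBackOff
		}
	}
	return d
}

// observedRestartDelay recovers the delay the way the getRestartDelay log
// lines do: time between the previous termination and the latest start.
func observedRestartDelay(st v1.ContainerStatus) (time.Duration, bool) {
	if st.LastTerminationState.Terminated == nil || st.State.Running == nil {
		return 0, false
	}
	return st.State.Running.StartedAt.Sub(st.LastTerminationState.Terminated.FinishedAt.Time), true
}

func main() {
	for _, rc := range []int32{1, 3, 7, 10} {
		fmt.Printf("restartCount=%d -> expected back-off %v\n", rc, expectedBackOff(rc))
	}
}
```

For restartCount = 7 the doubling sequence 10s, 20s, 40s, 80s, 160s, 320s already exceeds the cap, so every sample in this run should sit at roughly 5m, which is exactly what the log shows.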