Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1653089259 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 20 23:27:41.427: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.433: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 20 23:27:41.462: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 20 23:27:41.535: INFO: The status of Pod cmk-init-discover-node1-vkzkd is Succeeded, skipping waiting
May 20 23:27:41.535: INFO: The status of Pod cmk-init-discover-node2-b7gw4 is Succeeded, skipping waiting
May 20 23:27:41.535: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 20 23:27:41.535: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 20 23:27:41.535: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 20 23:27:41.552: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 20 23:27:41.553: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 20 23:27:41.553: INFO: e2e test version: v1.21.9
May 20 23:27:41.554: INFO: kube-apiserver version: v1.21.1
May 20 23:27:41.554: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.561: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 20 23:27:41.556: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.574: INFO: Cluster IP family: ipv4
S
------------------------------
May 20 23:27:41.557: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.579: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
May 20 23:27:41.564: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.587: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 20 23:27:41.570: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.590: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
May 20 23:27:41.573: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.594: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 20 23:27:41.594: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.615: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 20 23:27:41.595: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.618: INFO:
Cluster IP family: ipv4
SSS
------------------------------
May 20 23:27:41.599: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.619: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
May 20 23:27:41.602: INFO: >>> kubeConfig: /root/.kube/config
May 20 23:27:41.623: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:27:41.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
W0520 23:27:41.750186 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:27:41.750: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:27:41.753: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
May 20 23:27:41.755: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:27:41.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-1094" for this suite.
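The "testing pod creation to see if PodSecurityPolicy is enabled" step above works by creating a throwaway pod with a server-side dry run and inspecting the error (here the cmk.intel.com webhook does not support dry run, so the framework gives up and assumes PSP is disabled). A minimal client-go sketch of that idea, not the suite's actual code; the pod name, namespace, and pause image are illustrative:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "psp-probe-"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.4.1"}},
		},
	}
	// DryRun=All asks the apiserver and its admission chain to validate the pod
	// without persisting it. An admission webhook that does not support dry run
	// fails here, producing exactly the error logged above.
	_, err = client.CoreV1().Pods("default").Create(context.TODO(),
		pod, metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		fmt.Println("dry-run pod creation failed; treating PodSecurityPolicy as disabled:", err)
	}
}
```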
S [SKIPPING] in Spec Setup (BeforeEach) [0.043 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 20 23:27:41.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
W0520 23:27:41.802491 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 20 23:27:41.802: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 20 23:27:41.804: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] should have OwnerReferences set
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88
[AfterEach] [sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 20 23:27:41.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-3223" for this suite.
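What the NodeLease spec just exercised: each node's Lease object in the kube-node-lease namespace should carry an OwnerReference pointing back at its Node. A rough client-go sketch of that check, illustrative rather than the test's source:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// The kubelet renews a Lease named after the node in kube-node-lease.
		lease, err := client.CoordinationV1().Leases("kube-node-lease").
			Get(context.TODO(), node.Name, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The spec asserts this list is non-empty and references the Node.
		fmt.Printf("%s: %d owner reference(s)\n", node.Name, len(lease.OwnerReferences))
	}
}
```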
•SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":1,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:41.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api W0520 23:27:41.771789 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:41.772: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:41.773: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars May 20 23:27:41.791: INFO: Waiting up to 5m0s for pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45" in namespace "downward-api-3320" to be "Succeeded or Failed" May 20 23:27:41.794: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062717ms May 20 23:27:43.799: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988231s May 20 23:27:45.804: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012328905s May 20 23:27:47.808: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016481538s May 20 23:27:49.813: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021897561s May 20 23:27:51.817: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.025047278s STEP: Saw pod success May 20 23:27:51.817: INFO: Pod "downward-api-231991e9-d325-4671-b01e-15dbb81c7c45" satisfied condition "Succeeded or Failed" May 20 23:27:51.819: INFO: Trying to get logs from node node2 pod downward-api-231991e9-d325-4671-b01e-15dbb81c7c45 container dapi-container: STEP: delete the pod May 20 23:27:51.831: INFO: Waiting for pod downward-api-231991e9-d325-4671-b01e-15dbb81c7c45 to disappear May 20 23:27:51.833: INFO: Pod downward-api-231991e9-d325-4671-b01e-15dbb81c7c45 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:27:51.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3320" for this suite. • [SLOW TEST:10.099 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":1,"skipped":42,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:41.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups May 20 23:27:41.923: INFO: Waiting up to 5m0s for pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231" in namespace "security-context-808" to be "Succeeded or Failed" May 20 23:27:41.926: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148163ms May 20 23:27:43.929: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005439884s May 20 23:27:45.933: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009982938s May 20 23:27:47.940: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017003522s May 20 23:27:49.944: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Pending", Reason="", readiness=false. Elapsed: 8.020613031s May 20 23:27:51.948: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0248099s May 20 23:27:53.954: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.030235276s STEP: Saw pod success May 20 23:27:53.954: INFO: Pod "security-context-ee79d9f8-6522-4db0-bed5-23f382952231" satisfied condition "Succeeded or Failed" May 20 23:27:53.957: INFO: Trying to get logs from node node1 pod security-context-ee79d9f8-6522-4db0-bed5-23f382952231 container test-container: STEP: delete the pod May 20 23:27:53.970: INFO: Waiting for pod security-context-ee79d9f8-6522-4db0-bed5-23f382952231 to disappear May 20 23:27:53.972: INFO: Pod security-context-ee79d9f8-6522-4db0-bed5-23f382952231 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:27:53.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-808" for this suite. • [SLOW TEST:12.092 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":78,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:42.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime W0520 23:27:42.057410 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:42.057: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:42.059: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:27:57.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2003" for this suite. 
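For reference, the [sig-node] Downward API spec earlier in this block ("host IP and pod IP as an env var if pod uses host network") reduces to a pod shaped like the following sketch: host networking plus env vars projected from status.hostIP and status.podIP. Names, image, and command are illustrative, not taken from the test's source:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			HostNetwork:   true, // the [LinuxOnly] case above runs on the host network
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					// fieldRef fills these in from the pod's status at start time.
					{Name: "HOST_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Containers[0].Env)
}
```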
• [SLOW TEST:15.113 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":1,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:42.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0520 23:27:42.088402 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:42.088: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:42.090: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 May 20 23:27:42.104: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db" in namespace "security-context-test-4707" to be "Succeeded or Failed" May 20 23:27:42.106: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258358ms May 20 23:27:44.112: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008089817s May 20 23:27:46.115: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011254904s May 20 23:27:48.124: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020517659s May 20 23:27:50.128: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023966495s May 20 23:27:52.131: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02759298s May 20 23:27:54.138: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. Elapsed: 12.034574878s May 20 23:27:56.145: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.041593535s May 20 23:27:58.150: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.046519556s May 20 23:27:58.150: INFO: Pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db" satisfied condition "Succeeded or Failed" May 20 23:27:58.157: INFO: Got logs for pod "busybox-privileged-true-76053495-af35-497f-896f-5f2cd76919db": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:27:58.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4707" for this suite. • [SLOW TEST:16.103 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:52.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 20 23:27:52.226: INFO: Found ClusterRoles; assuming RBAC is enabled. 
[It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod May 20 23:27:52.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3681 create -f -' May 20 23:27:52.839: INFO: stderr: "" May 20 23:27:52.839: INFO: stdout: "secret/test-secret created\n" May 20 23:27:52.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3681 create -f -' May 20 23:27:53.211: INFO: stderr: "" May 20 23:27:53.211: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly May 20 23:28:03.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3681 logs secret-test-pod test-container' May 20 23:28:03.381: INFO: stderr: "" May 20 23:28:03.381: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:03.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3681" for this suite. • [SLOW TEST:11.195 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":2,"skipped":219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:41.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0520 23:27:41.959705 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:41.960: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:41.961: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-a27d64ed-d789-4686-b944-b871a3a43325 in namespace container-probe-2737 May 20 23:28:01.981: INFO: Started pod startup-override-a27d64ed-d789-4686-b944-b871a3a43325 in namespace container-probe-2737 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:28:01.984: INFO: Initial restart count of pod 
startup-override-a27d64ed-d789-4686-b944-b871a3a43325 is 0 May 20 23:28:05.996: INFO: Restart count of pod container-probe-2737/startup-override-a27d64ed-d789-4686-b944-b871a3a43325 is now 1 (4.012579061s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:06.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2737" for this suite. • [SLOW TEST:24.075 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":1,"skipped":90,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:54.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container May 20 23:27:54.095: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:56.100: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:58.100: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:00.099: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:02.100: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:04.102: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:06.099: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container May 20 23:28:06.101: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2907 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:28:06.101: INFO: >>> kubeConfig: /root/.kube/config May 20 23:28:06.467: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-2907 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:28:06.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container May 20 23:28:06.745: INFO: ExecWithOptions {Command:[ip link add 
dummy1 type dummy] Namespace:e2e-privileged-pod-2907 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:28:06.745: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:06.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-2907" for this suite. • [SLOW TEST:12.808 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":2,"skipped":107,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:58.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 May 20 23:27:58.615: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-7951" to be "Succeeded or Failed" May 20 23:27:58.618: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005302ms May 20 23:28:00.623: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007045823s May 20 23:28:02.629: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013907214s May 20 23:28:04.636: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.020741454s May 20 23:28:06.640: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.024250948s May 20 23:28:08.644: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.028813232s May 20 23:28:08.644: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:08.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7951" for this suite. 
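The PrivilegedPod spec above runs the same "ip link" commands in a privileged and a non-privileged container and expects opposite outcomes: only the privileged container may manipulate host network interfaces. A sketch of the two container specs, assuming the agnhost image seen in the log; pod wiring and the exec calls are omitted:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	privileged := corev1.Container{
		Name:            "privileged-container",
		Image:           "k8s.gcr.io/e2e-test-images/agnhost:2.32", // image name taken from the log above
		SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(true)},
	}
	notPrivileged := corev1.Container{
		Name:            "not-privileged-container",
		Image:           "k8s.gcr.io/e2e-test-images/agnhost:2.32",
		SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(false)},
	}
	// "ip link add dummy1 type dummy" succeeds only in the privileged container;
	// the spec asserts the same command fails in the unprivileged one.
	fmt.Println(privileged.Name, notPrivileged.Name)
}
```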
• [SLOW TEST:10.076 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":2,"skipped":361,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:06.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:10.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6876" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":3,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:06.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes May 20 23:28:06.286: INFO: Waiting up to 5m0s for pod "pod-always-succeed1100899a-eb97-413d-8e25-ae50fca1840d" in namespace "pods-2013" to be "Succeeded or Failed" May 20 23:28:06.288: INFO: Pod "pod-always-succeed1100899a-eb97-413d-8e25-ae50fca1840d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.068899ms May 20 23:28:08.294: INFO: Pod "pod-always-succeed1100899a-eb97-413d-8e25-ae50fca1840d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007552482s May 20 23:28:10.298: INFO: Pod "pod-always-succeed1100899a-eb97-413d-8e25-ae50fca1840d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011528418s May 20 23:28:12.302: INFO: Pod "pod-always-succeed1100899a-eb97-413d-8e25-ae50fca1840d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01565861s STEP: Saw pod success May 20 23:28:12.302: INFO: Pod "pod-always-succeed1100899a-eb97-413d-8e25-ae50fca1840d" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:14.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2013" for this suite. • [SLOW TEST:8.074 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":2,"skipped":209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:09.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:15.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4256" for this suite. 
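The "pull from private registry with secret" spec above relies on spec.imagePullSecrets: the kubelet presents the referenced docker-registry secret when pulling. A minimal sketch of the pod shape; the secret name and registry are placeholders, not values from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "private-image-demo"},
		Spec: corev1.PodSpec{
			// The kubelet uses this kubernetes.io/dockerconfigjson secret for the pull.
			ImagePullSecrets: []corev1.LocalObjectReference{{Name: "regcred"}}, // placeholder
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "registry.example.com/private/image:latest", // placeholder
			}},
		},
	}
	fmt.Println(pod.Spec.ImagePullSecrets[0].Name)
}
```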
• [SLOW TEST:6.084 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":3,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:03.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 20 23:28:14.610: INFO: start=2022-05-20 23:28:09.566583299 +0000 UTC m=+29.729226890, now=2022-05-20 23:28:14.610415909 +0000 UTC m=+34.773059481, kubelet pod: {"metadata":{"name":"pod-submit-remove-ddd35dde-27a8-4210-8f26-a7c4a2c9fca6","namespace":"pods-4636","uid":"42b4b2b0-d0e7-483b-bac2-59d461953eb6","resourceVersion":"77595","creationTimestamp":"2022-05-20T23:28:03Z","deletionTimestamp":"2022-05-20T23:28:39Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"531304845"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.241\"\n ],\n \"mac\": \"ee:1b:cb:b5:f3:4b\",\n \"default\": true,\n \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.4.241\"\n ],\n \"mac\": \"ee:1b:cb:b5:f3:4b\",\n \"default\": true,\n \"dns\": 
{}\n}]","kubernetes.io/config.seen":"2022-05-20T23:28:03.930958396Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-05-20T23:28:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-hrtxd","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-hrtxd","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node1","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-05-20T23:28:03Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-05-20T23:28:12Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-05-20T23:28:12Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-05-20T23:28:03Z"}],"hostIP":"10.10.190.207","startTime":"2022-05-20T23:28:03Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"","started":false}],"qosClass":"BestEffort"}} May 20 23:28:19.587: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:19.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4636" for this suite. • [SLOW TEST:16.091 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":3,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:41.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0520 23:27:41.753039 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:41.753: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:41.755: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-8310a72c-067c-46d0-aa14-698dd0ea20f4 in namespace container-probe-9939 May 20 23:27:53.781: INFO: Started pod liveness-8310a72c-067c-46d0-aa14-698dd0ea20f4 in namespace container-probe-9939 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:27:53.784: INFO: Initial restart count of pod liveness-8310a72c-067c-46d0-aa14-698dd0ea20f4 is 0 May 20 23:28:19.853: INFO: Restart count of pod container-probe-9939/liveness-8310a72c-067c-46d0-aa14-698dd0ea20f4 is now 1 (26.068553495s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:19.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9939" for this suite. 
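The liveness spec above probes an HTTP path that redirects locally; the kubelet follows same-host redirects, the redirect target fails, and restartCount goes from 0 to 1 as logged. A sketch of such a probe; the path and port are illustrative, and note that in client-go releases contemporary with this v1.21 log the embedded field is named Handler rather than ProbeHandler:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	container := corev1.Container{
		Name:  "liveness",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // image name taken from the log above
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{ // named Handler in older client-go
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/redirect?loc=/healthz", // illustrative locally-redirecting path
					Port: intstr.FromInt(8080),
				},
			},
			InitialDelaySeconds: 10,
			FailureThreshold:    1,
		},
	}
	fmt.Println(container.LivenessProbe.HTTPGet.Path)
}
```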
• [SLOW TEST:38.137 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":1,"skipped":32,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:20.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ssh STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36 May 20 23:28:20.058: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:20.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ssh-8022" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds] [sig-node] SSH /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should SSH to all nodes and run commands [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42 ------------------------------ SSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:11.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a label on the found node. 
STEP: verifying the node has the label fizz-1e0ba9b9-f619-4262-a82a-2ce91126565d buzz STEP: verifying the node has the label foo-3a955e8a-b68e-44e2-a4da-e7e8d30948be bar STEP: Trying to create runtimeclass and pod STEP: removing the label foo-3a955e8a-b68e-44e2-a4da-e7e8d30948be off the node node1 STEP: verifying the node doesn't have the label foo-3a955e8a-b68e-44e2-a4da-e7e8d30948be STEP: removing the label fizz-1e0ba9b9-f619-4262-a82a-2ce91126565d off the node node1 STEP: verifying the node doesn't have the label fizz-1e0ba9b9-f619-4262-a82a-2ce91126565d [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:29.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-7267" for this suite. • [SLOW TEST:18.129 seconds] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run a Pod requesting a RuntimeClass with scheduling without taints /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125 ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":4,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:19.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 May 20 23:28:19.733: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-2769" to be "Succeeded or Failed" May 20 23:28:19.737: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.505642ms May 20 23:28:21.740: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007740912s May 20 23:28:23.745: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01258381s May 20 23:28:25.749: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016354316s May 20 23:28:27.754: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021754219s May 20 23:28:29.759: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.026467202s May 20 23:28:29.759: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:29.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2769" for this suite. 
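The RuntimeClass spec above requests a class via spec.runtimeClassName; the scheduler also honors the class's scheduling constraints, which is why the test first labeled node1 and then created a matching class. A hedged sketch of the pod side; the class name is a placeholder:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	rcName := "test-runtimeclass" // placeholder; the test generates its own class name
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "runtimeclass-demo"},
		Spec: corev1.PodSpec{
			// Merges the RuntimeClass's scheduling.nodeSelector into pod scheduling.
			RuntimeClassName: &rcName,
			Containers:       []corev1.Container{{Name: "main", Image: "busybox"}},
		},
	}
	fmt.Println(*pod.Spec.RuntimeClassName)
}
```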
• [SLOW TEST:10.077 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":4,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:42.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet W0520 23:27:42.135229 36 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:42.135: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:42.137: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 20 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 in namespace kubelet-5895 I0520 23:27:42.170765 36 runners.go:190] Created replication controller with name: cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8, namespace: kubelet-5895, replica count: 20 I0520 23:27:52.221492 36 runners.go:190] cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 23:28:02.221891 36 runners.go:190] cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 Pods: 20 out of 20 created, 16 running, 4 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0520 23:28:12.222629 36 runners.go:190] cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 20 23:28:13.223: INFO: Checking pods on node node1 via /runningpods endpoint May 20 23:28:13.223: INFO: Checking pods on node node2 via /runningpods endpoint May 20 23:28:13.245: INFO: Resource usage on node "master1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.398       4848.25                  1683.84
"runtime"   0.128       725.16                   317.98
"kubelet"   0.128       725.16                   317.98
Resource usage on node "master2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.539       3833.27                  1702.98
"runtime"   0.113       575.16                   242.31
"kubelet"   0.113       575.16                   242.31
Resource usage on node "master3":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.370       3512.34                  1512.63
"runtime"   0.101       518.50                   235.89
"kubelet"   0.101       518.50                   235.89
Resource usage on node "node1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         1.988       6398.61                  2363.11
"runtime"   1.301       2581.53                  549.32
"kubelet"   1.301       2581.53                  549.32
Resource usage on node "node2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         1.534       3946.08                  1149.73
"runtime"   1.056       1447.52                  529.77
"kubelet"   1.056       1447.52                  529.77
STEP: Deleting the RC STEP: deleting ReplicationController cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 in namespace kubelet-5895, will wait for the garbage collector to delete the pods May 20 23:28:13.302: INFO: Deleting ReplicationController cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 took: 4.117452ms May 20 23:28:13.903: INFO: Terminating ReplicationController cleanup20-5ecc5f2d-adcf-4550-8130-0852f734f5f8 pods took: 600.254379ms May 20 23:28:30.503: INFO: Checking pods on node node2 via /runningpods endpoint May 20 23:28:30.503: INFO: Checking pods on node node1 via /runningpods endpoint May 20 23:28:30.520: INFO: Deleting 20 pods on 2 nodes completed in 1.01708858s after the RC was deleted May 20 23:28:30.520: INFO: CPU usage of containers on node "master3":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.351  0.351  0.370  0.370  0.370
"runtime"   0.000  0.000  0.101  0.110  0.110  0.110  0.110
"kubelet"   0.000  0.000  0.101  0.110  0.110  0.110  0.110
CPU usage of containers on node "node1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  1.988  1.988  1.992  1.992  1.992
"runtime"   0.000  0.000  0.379  0.379  0.379  0.379  0.379
"kubelet"   0.000  0.000  0.379  0.379  0.379  0.379  0.379
CPU usage of containers on node "node2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  1.534  1.534  1.726  1.726  1.726
"runtime"   0.000  0.000  0.462  0.462  0.720  0.720  0.720
"kubelet"   0.000  0.000  0.462  0.462  0.720  0.720  0.720
CPU usage of containers on node "master1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.398  0.398  0.443  0.443  0.443
"runtime"   0.000  0.000  0.117  0.117  0.117  0.117  0.117
"kubelet"   0.000  0.000  0.117  0.117  0.117  0.117  0.117
CPU usage of containers on node "master2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.363  0.468  0.497  0.539  0.539  0.539
"runtime"   0.000  0.000  0.101  0.108  0.108  0.108  0.108
"kubelet"   0.000  0.000  0.101  0.108  0.108  0.108  0.108
[AfterEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326 STEP: removing the label kubelet_cleanup off the node node1 STEP: verifying the node doesn't have the label kubelet_cleanup STEP: removing the label kubelet_cleanup off the node node2 STEP: verifying the node doesn't have the label kubelet_cleanup [AfterEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:30.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-5895" for this suite. • [SLOW TEST:48.443 seconds] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279 kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":1,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:29.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 20 23:28:34.897: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:34.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3333" for this suite.
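For context on the Container Runtime spec just finished and summarized below: it runs a container that writes DONE into its termination message file and exits, then reads the message back from the pod status. A minimal sketch of such a pod; the names here are illustrative, not taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Write the message, then exit 0 so the pod reaches Succeeded.
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log    # this is also the default path
    terminationMessagePolicy: File

After the container terminates, the kubelet copies the file contents into status.containerStatuses[0].state.terminated.message, which is the DONE value the Expected/match line above compares against.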
• [SLOW TEST:5.086 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":5,"skipped":355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:30.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:35.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8624" for this suite. 
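The runAsNonRoot spec below passes because the kubelet refuses to start a container that requests runAsNonRoot but resolves to UID 0. A sketch of the kind of pod that triggers the rejection; the names are illustrative, since the actual pod spec is not shown in the log:

apiVersion: v1
kind: Pod
metadata:
  name: nonroot-no-uid-demo    # illustrative name
spec:
  containers:
  - name: main
    image: busybox             # image whose default user is root
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true       # no runAsUser set at pod or container level

With no numeric UID to verify, the kubelet leaves the container in CreateContainerConfigError instead of running it; giving it any non-zero runAsUser (as the earlier "explicit non-root user ID" spec does) lets it start.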
•S ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":2,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:35.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:37.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5309" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":3,"skipped":532,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:15.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:37.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5093" for this suite. 
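The readiness-gate spec summarized below drives pod readiness from custom status conditions. A minimal sketch using the same condition names as the STEP lines above; the pod name is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-gate-demo    # illustrative name
spec:
  readinessGates:
  - conditionType: "k8s.io/test-condition1"
  - conditionType: "k8s.io/test-condition2"
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]

The pod only reports Ready while every listed conditionType is True in its status. The conditions are written by patching the pods/status subresource, which is the sequence of patches in the STEP lines above; patching test-condition1 back to false flips the pod out of Ready again.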
• [SLOW TEST:22.076 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":4,"skipped":615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:37.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:37.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3781" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":4,"skipped":623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:35.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod May 20 23:28:35.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7585 create -f -' May 20 23:28:35.657: INFO: stderr: "" May 20 23:28:35.657: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 20 23:28:39.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7585 logs dapi-test-pod test-container' May 20 23:28:39.841: INFO: stderr: "" May 20 23:28:39.841: INFO: stdout: 
"KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-7585\nMY_POD_IP=10.244.4.250\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 20 23:28:39.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7585 logs dapi-test-pod test-container' May 20 23:28:40.025: INFO: stderr: "" May 20 23:28:40.025: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-7585\nMY_POD_IP=10.244.4.250\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.207\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:40.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7585" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":6,"skipped":467,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:37.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 20 23:28:37.866: INFO: Waiting up to 5m0s for pod "security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7" in namespace "security-context-6546" to be "Succeeded or Failed" May 20 23:28:37.868: INFO: Pod "security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7": Phase="Pending", Reason="", readiness=false. Elapsed: 1.929382ms May 20 23:28:39.872: INFO: Pod "security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005695948s May 20 23:28:41.876: INFO: Pod "security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009545799s STEP: Saw pod success May 20 23:28:41.876: INFO: Pod "security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7" satisfied condition "Succeeded or Failed" May 20 23:28:41.879: INFO: Trying to get logs from node node2 pod security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7 container test-container: STEP: delete the pod May 20 23:28:41.891: INFO: Waiting for pod security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7 to disappear May 20 23:28:41.894: INFO: Pod security-context-4c3ec5fc-0edd-4be9-8e68-afa4dfb654c7 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:41.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6546" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":5,"skipped":792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:20.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 May 20 23:28:42.130: INFO: The status of Pod startup-ba509ccd-3e14-4fe1-a883-a42978b370d7 is Running (Ready = true) May 20 23:28:42.132: INFO: Container started at 2022-05-20 23:28:42.126120447 +0000 UTC m=+62.290788522, pod became ready at 2022-05-20 23:28:42.130286062 +0000 UTC m=+62.294954044, 4.165522ms after startupProbe succeeded [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:42.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2996" for this suite. 
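The probe spec above measured the gap between a startup probe succeeding and the pod turning Ready (about 4 ms in this run). Roughly the shape of a pod with only a startup probe; all names and timings here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: startup-ready-demo    # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 30; touch /tmp/started; sleep 3600"]
    startupProbe:
      exec:
        command: ["test", "-f", "/tmp/started"]
      periodSeconds: 1
      failureThreshold: 60    # allow up to roughly 60 s for startup

With no separate readinessProbe defined, readiness should follow immediately once the startup probe passes, which is the property this spec asserts.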
• [SLOW TEST:22.059 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":2,"skipped":121,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:41.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0520 23:27:41.778901 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:41.779: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:41.781: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-01555b09-f299-44f3-9885-8bc96648bc2c in namespace container-probe-6656 May 20 23:27:51.802: INFO: Started pod startup-01555b09-f299-44f3-9885-8bc96648bc2c in namespace container-probe-6656 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:27:51.804: INFO: Initial restart count of pod startup-01555b09-f299-44f3-9885-8bc96648bc2c is 0 May 20 23:28:43.916: INFO: Restart count of pod container-probe-6656/startup-01555b09-f299-44f3-9885-8bc96648bc2c is now 1 (52.111001918s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:43.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6656" for this suite. 
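The spec above relies on liveness being gated by startup: liveness probes do not run, and do not count against the container, until the startup probe has succeeded; after that, the failing liveness probe restarts the container (restart count reached 1 after about 52 s here). A sketch with illustrative names and timings:

apiVersion: v1
kind: Pod
metadata:
  name: startup-gated-liveness-demo    # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 20; touch /tmp/startup-done; sleep 3600"]
    startupProbe:
      exec:
        command: ["test", "-f", "/tmp/startup-done"]
      periodSeconds: 5
      failureThreshold: 12
    livenessProbe:
      exec:
        command: ["test", "-f", "/tmp/healthy"]    # never created, so it always fails
      periodSeconds: 5
      failureThreshold: 3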
• [SLOW TEST:62.184 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":1,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:40.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 May 20 23:28:40.110: INFO: Waiting up to 5m0s for pod "busybox-user-0-b307ba42-0973-4d55-b5f4-7b11a3e3e2e8" in namespace "security-context-test-6039" to be "Succeeded or Failed" May 20 23:28:40.112: INFO: Pod "busybox-user-0-b307ba42-0973-4d55-b5f4-7b11a3e3e2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097037ms May 20 23:28:42.116: INFO: Pod "busybox-user-0-b307ba42-0973-4d55-b5f4-7b11a3e3e2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006386712s May 20 23:28:44.124: INFO: Pod "busybox-user-0-b307ba42-0973-4d55-b5f4-7b11a3e3e2e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013802603s May 20 23:28:44.124: INFO: Pod "busybox-user-0-b307ba42-0973-4d55-b5f4-7b11a3e3e2e8" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:44.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6039" for this suite. 
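The uid-0 spec below is the mirror image of the runAsNonRoot cases: the container is explicitly asked to run as root and only has to exit successfully. A sketch, names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-0-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "id -u"]    # prints 0 when running as root
    securityContext:
      runAsUser: 0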
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":7,"skipped":486,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:41.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0520 23:27:42.021989 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 20 23:27:42.022: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 20 23:27:42.023: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-a119ee91-e6d2-4494-8381-2ae02dfbdbb6 in namespace container-probe-2402 May 20 23:27:58.042: INFO: Started pod busybox-a119ee91-e6d2-4494-8381-2ae02dfbdbb6 in namespace container-probe-2402 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:27:58.045: INFO: Initial restart count of pod busybox-a119ee91-e6d2-4494-8381-2ae02dfbdbb6 is 0 May 20 23:28:46.159: INFO: Restart count of pod container-probe-2402/busybox-a119ee91-e6d2-4494-8381-2ae02dfbdbb6 is now 1 (48.114695078s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:46.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2402" for this suite. 
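The probe spec above exercises exec-probe timeout enforcement: an exec liveness command that runs longer than timeoutSeconds counts as a probe failure (behaviour enforced by the ExecProbeTimeout feature gate, on by default since Kubernetes 1.20), so the container gets restarted. A sketch, names and timings illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: slow-exec-probe-demo    # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "sleep 10"]    # outlives the 1 s timeout
      timeoutSeconds: 1
      periodSeconds: 10
      failureThreshold: 1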
• [SLOW TEST:64.174 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":1,"skipped":119,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:42.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 20 23:28:42.554: INFO: Waiting up to 5m0s for pod "security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509" in namespace "security-context-9509" to be "Succeeded or Failed" May 20 23:28:42.556: INFO: Pod "security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02412ms May 20 23:28:44.558: INFO: Pod "security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00440426s May 20 23:28:46.563: INFO: Pod "security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008531414s May 20 23:28:48.569: INFO: Pod "security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014671442s STEP: Saw pod success May 20 23:28:48.569: INFO: Pod "security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509" satisfied condition "Succeeded or Failed" May 20 23:28:48.572: INFO: Trying to get logs from node node2 pod security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509 container test-container: STEP: delete the pod May 20 23:28:48.626: INFO: Waiting for pod security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509 to disappear May 20 23:28:48.628: INFO: Pod security-context-915f27ac-fca6-4a8f-aa4f-05a1ece6a509 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:48.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9509" for this suite. 
• [SLOW TEST:6.113 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":3,"skipped":321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:42.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 20 23:28:42.671: INFO: Waiting up to 5m0s for pod "security-context-efd1b83d-51ab-4143-a728-f7949743b4b0" in namespace "security-context-2085" to be "Succeeded or Failed" May 20 23:28:42.674: INFO: Pod "security-context-efd1b83d-51ab-4143-a728-f7949743b4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.815002ms May 20 23:28:44.679: INFO: Pod "security-context-efd1b83d-51ab-4143-a728-f7949743b4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007419697s May 20 23:28:46.682: INFO: Pod "security-context-efd1b83d-51ab-4143-a728-f7949743b4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010815561s May 20 23:28:48.686: INFO: Pod "security-context-efd1b83d-51ab-4143-a728-f7949743b4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014722394s STEP: Saw pod success May 20 23:28:48.686: INFO: Pod "security-context-efd1b83d-51ab-4143-a728-f7949743b4b0" satisfied condition "Succeeded or Failed" May 20 23:28:48.690: INFO: Trying to get logs from node node2 pod security-context-efd1b83d-51ab-4143-a728-f7949743b4b0 container test-container: STEP: delete the pod May 20 23:28:48.703: INFO: Waiting for pod security-context-efd1b83d-51ab-4143-a728-f7949743b4b0 to disappear May 20 23:28:48.705: INFO: Pod security-context-efd1b83d-51ab-4143-a728-f7949743b4b0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:48.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2085" for this suite. 
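Both seccomp specs above (namespaces security-context-9509 and security-context-2085) still go through the legacy alpha annotation named in their STEP lines rather than the securityContext.seccompProfile field that superseded it in 1.19. A sketch of the unconfined-on-the-pod variant, names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-unconfined-demo    # illustrative name
  annotations:
    seccomp.security.alpha.kubernetes.io/pod: unconfined
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "grep Seccomp /proc/self/status"]

An unconfined container reports Seccomp: 0 (no filter attached) in /proc/self/status, which is presumably what the test reads back from the container log; the field-based equivalent is a pod-level securityContext.seccompProfile with type: Unconfined.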
• [SLOW TEST:6.100 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":6,"skipped":1166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:43.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 May 20 23:28:44.012: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-945440da-f118-42e5-9093-4e8941619290" in namespace "security-context-test-7386" to be "Succeeded or Failed" May 20 23:28:44.015: INFO: Pod "alpine-nnp-nil-945440da-f118-42e5-9093-4e8941619290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347226ms May 20 23:28:46.018: INFO: Pod "alpine-nnp-nil-945440da-f118-42e5-9093-4e8941619290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005388598s May 20 23:28:48.024: INFO: Pod "alpine-nnp-nil-945440da-f118-42e5-9093-4e8941619290": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01177886s May 20 23:28:50.028: INFO: Pod "alpine-nnp-nil-945440da-f118-42e5-9093-4e8941619290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015985793s May 20 23:28:50.028: INFO: Pod "alpine-nnp-nil-945440da-f118-42e5-9093-4e8941619290" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:50.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7386" for this suite. 
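The AllowPrivilegeEscalation spec above checks the default behaviour: with the field left unset and a non-root UID, privilege escalation stays permitted, i.e. the kernel's no_new_privs bit is not set for the process. A sketch, names illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nnp-nil-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: alpine
    command: ["/bin/sh", "-c", "grep NoNewPrivs /proc/self/status"]    # expect NoNewPrivs: 0
    securityContext:
      runAsUser: 1000
      # allowPrivilegeEscalation deliberately left unset

The companion spec further down ("when true") pins allowPrivilegeEscalation: true explicitly and expects the same NoNewPrivs: 0 reading.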
• [SLOW TEST:6.069 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":53,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:50.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 May 20 23:28:50.242: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:50.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-6765" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:50.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 20 23:28:50.373: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:50.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-4038" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.030 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:46.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 May 20 23:28:46.558: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-d1354565-025f-4ff6-8644-87bfbb6c7f87" in namespace "security-context-test-5364" to be "Succeeded or Failed" May 20 23:28:46.560: INFO: Pod 
"alpine-nnp-true-d1354565-025f-4ff6-8644-87bfbb6c7f87": Phase="Pending", Reason="", readiness=false. Elapsed: 1.994747ms May 20 23:28:48.564: INFO: Pod "alpine-nnp-true-d1354565-025f-4ff6-8644-87bfbb6c7f87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005888759s May 20 23:28:50.566: INFO: Pod "alpine-nnp-true-d1354565-025f-4ff6-8644-87bfbb6c7f87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008457518s May 20 23:28:52.573: INFO: Pod "alpine-nnp-true-d1354565-025f-4ff6-8644-87bfbb6c7f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015123258s May 20 23:28:52.573: INFO: Pod "alpine-nnp-true-d1354565-025f-4ff6-8644-87bfbb6c7f87" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:52.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5364" for this suite. • [SLOW TEST:6.064 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":307,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:52.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:52.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-928" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":361,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:52.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 20 23:28:52.826: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:52.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-3007" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.031 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:29.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination May 20 23:28:53.388: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:53.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8071" for this suite. 
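The PreStop spec summarized below hinges on graceful-deletion ordering: when the pod is deleted, the kubelet runs the preStop hook to completion (bounded by terminationGracePeriodSeconds) before signalling the container, which is why the log can still observe the pod running mid-termination. A sketch, names and timings illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo    # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "sleep 10"]    # holds the pod in Terminating for ~10 s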
• [SLOW TEST:24.092 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":5,"skipped":231,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:50.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 20 23:28:50.649: INFO: Waiting up to 5m0s for pod "security-context-00fba251-d827-41d7-a241-1eef4b5674df" in namespace "security-context-5753" to be "Succeeded or Failed" May 20 23:28:50.651: INFO: Pod "security-context-00fba251-d827-41d7-a241-1eef4b5674df": Phase="Pending", Reason="", readiness=false. Elapsed: 1.817244ms May 20 23:28:52.667: INFO: Pod "security-context-00fba251-d827-41d7-a241-1eef4b5674df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018317253s May 20 23:28:54.671: INFO: Pod "security-context-00fba251-d827-41d7-a241-1eef4b5674df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02193813s May 20 23:28:56.675: INFO: Pod "security-context-00fba251-d827-41d7-a241-1eef4b5674df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026283062s STEP: Saw pod success May 20 23:28:56.675: INFO: Pod "security-context-00fba251-d827-41d7-a241-1eef4b5674df" satisfied condition "Succeeded or Failed" May 20 23:28:56.677: INFO: Trying to get logs from node node1 pod security-context-00fba251-d827-41d7-a241-1eef4b5674df container test-container: STEP: delete the pod May 20 23:28:56.797: INFO: Waiting for pod security-context-00fba251-d827-41d7-a241-1eef4b5674df to disappear May 20 23:28:56.799: INFO: Pod security-context-00fba251-d827-41d7-a241-1eef4b5674df no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:28:56.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5753" for this suite. 
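The spec above sets RunAsUser at pod scope rather than per container, so every container in the pod inherits that UID unless it overrides it with its own securityContext. A sketch, name and UID illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-runasuser-demo    # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # pod-level default for all containers
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "id -u"]    # prints 1001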
• [SLOW TEST:6.193 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:57.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-be940a2c-5980-4a57-a905-72a44d79fe0d in namespace container-probe-382 May 20 23:28:03.306: INFO: Started pod startup-be940a2c-5980-4a57-a905-72a44d79fe0d in namespace container-probe-382 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:28:03.316: INFO: Initial restart count of pod startup-be940a2c-5980-4a57-a905-72a44d79fe0d is 0 May 20 23:29:09.476: INFO: Restart count of pod container-probe-382/startup-be940a2c-5980-4a57-a905-72a44d79fe0d is now 1 (1m6.160424712s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:09.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-382" for this suite. 
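The probe spec above is the failure path for startup probes: a probe that never succeeds exhausts failureThreshold, and the kubelet restarts the container, so the time to first restart is roughly initialDelaySeconds + failureThreshold x periodSeconds plus kill and backoff overhead (the ~66 s observed here is consistent with that shape). A sketch, names and numbers illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: startup-failing-demo    # illustrative name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    startupProbe:
      exec:
        command: ["test", "-f", "/tmp/never-created"]    # always fails
      periodSeconds: 10
      failureThreshold: 3    # give up and restart after about 3 x 10 s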
• [SLOW TEST:72.228 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":2,"skipped":182,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:09.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-2712/configmap-test-021dcb85-9ec1-49d0-8984-1af63ab33306 STEP: Updating configMap configmap-2712/configmap-test-021dcb85-9ec1-49d0-8984-1af63ab33306 STEP: Verifying update of ConfigMap configmap-2712/configmap-test-021dcb85-9ec1-49d0-8984-1af63ab33306 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:09.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2712" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":3,"skipped":293,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:09.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 20 23:29:09.999: INFO: Waiting up to 5m0s for pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae" in namespace "security-context-2558" to be "Succeeded or Failed" May 20 23:29:10.003: INFO: Pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.559636ms May 20 23:29:12.008: INFO: Pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008128024s May 20 23:29:14.013: INFO: Pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013455243s May 20 23:29:16.019: INFO: Pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.019940065s May 20 23:29:18.025: INFO: Pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.025313135s STEP: Saw pod success May 20 23:29:18.025: INFO: Pod "security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae" satisfied condition "Succeeded or Failed" May 20 23:29:18.027: INFO: Trying to get logs from node node1 pod security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae container test-container: STEP: delete the pod May 20 23:29:18.041: INFO: Waiting for pod security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae to disappear May 20 23:29:18.043: INFO: Pod security-context-39255806-fcb4-4d47-aae8-4d1ace6706ae no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:18.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2558" for this suite. • [SLOW TEST:8.086 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":4,"skipped":400,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:18.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 May 20 23:29:18.142: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:20.145: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:22.147: INFO: The status of Pod master is Running (Ready = true) May 20 23:29:22.162: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:24.168: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:26.169: INFO: The status of Pod slave is Running (Ready = true) May 20 23:29:26.184: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:28.189: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:30.188: INFO: The status of Pod private is Running (Ready = true) May 20 23:29:30.205: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:32.208: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) May 20 23:29:34.211: INFO: The status of Pod default is Running (Ready = true) May 20 23:29:34.216: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.216: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.306: INFO: Exec stderr: "" May 20 23:29:34.308: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.309: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.411: INFO: Exec stderr: "" May 20 23:29:34.414: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.414: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.497: INFO: Exec stderr: "" May 20 23:29:34.499: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.499: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.580: INFO: Exec stderr: "" May 20 23:29:34.583: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.583: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.686: INFO: Exec stderr: "" May 20 23:29:34.689: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.690: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.779: INFO: Exec stderr: "" May 20 23:29:34.782: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.783: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.866: INFO: Exec stderr: "" May 20 23:29:34.868: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.868: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:34.953: INFO: Exec stderr: "" May 20 23:29:34.956: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:34.956: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.052: INFO: Exec stderr: "" May 20 23:29:35.055: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.055: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.151: INFO: Exec stderr: "" May 20 23:29:35.154: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.154: INFO: >>> kubeConfig: 
/root/.kube/config May 20 23:29:35.241: INFO: Exec stderr: "" May 20 23:29:35.245: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.245: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.332: INFO: Exec stderr: "" May 20 23:29:35.335: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.335: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.430: INFO: Exec stderr: "" May 20 23:29:35.433: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.433: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.522: INFO: Exec stderr: "" May 20 23:29:35.525: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.525: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.601: INFO: Exec stderr: "" May 20 23:29:35.605: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.605: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.718: INFO: Exec stderr: "" May 20 23:29:35.721: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.721: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.813: INFO: Exec stderr: "" May 20 23:29:35.818: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.818: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.899: INFO: Exec stderr: "" May 20 23:29:35.902: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.902: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:35.992: INFO: Exec stderr: "" May 20 23:29:35.995: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:35.995: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:36.075: INFO: Exec stderr: "" May 20 23:29:38.094: INFO: ExecWithOptions {Command:[nsenter 
--mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-5732"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-5732"/host; echo host > "/var/lib/kubelet/mount-propagation-5732"/host/file] Namespace:mount-propagation-5732 PodName:hostexec-node1-hch7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 20 23:29:38.094: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.185: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.185: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.271: INFO: pod master mount master: stdout: "master", stderr: "" error: May 20 23:29:38.274: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.274: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.353: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:38.356: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.356: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.455: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:38.457: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.457: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.534: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:38.537: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.537: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.613: INFO: pod master mount host: stdout: "host", stderr: "" error: May 20 23:29:38.616: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.616: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.723: INFO: pod slave mount master: stdout: "master", stderr: "" error: May 20 23:29:38.727: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.727: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.824: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: May 20 23:29:38.827: INFO: ExecWithOptions 
{Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.827: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:38.925: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:38.930: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:38.930: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.018: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.021: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.021: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.107: INFO: pod slave mount host: stdout: "host", stderr: "" error: May 20 23:29:39.110: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.110: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.290: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.292: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.292: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.381: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.384: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.384: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.468: INFO: pod private mount private: stdout: "private", stderr: "" error: May 20 23:29:39.471: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.471: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.566: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.569: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.569: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.652: INFO: pod private mount host: stdout: "", stderr: 
"cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.655: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.655: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.756: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.759: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.759: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.872: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.874: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.874: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:39.958: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:39.960: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:39.960: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.063: INFO: pod default mount default: stdout: "default", stderr: "" error: May 20 23:29:40.066: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:40.066: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.156: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 20 23:29:40.156: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-5732"/master/file` = master] Namespace:mount-propagation-5732 PodName:hostexec-node1-hch7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 20 23:29:40.156: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.254: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-5732"/slave/file] Namespace:mount-propagation-5732 PodName:hostexec-node1-hch7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 20 23:29:40.254: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.340: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-5732"/host] Namespace:mount-propagation-5732 PodName:hostexec-node1-hch7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 20 23:29:40.340: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.435: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-5732 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:40.435: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.537: INFO: Exec stderr: "" May 20 23:29:40.539: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-5732 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:40.540: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.642: INFO: Exec stderr: "" May 20 23:29:40.645: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-5732 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:40.645: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.748: INFO: Exec stderr: "" May 20 23:29:40.752: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-5732 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 20 23:29:40.752: INFO: >>> kubeConfig: /root/.kube/config May 20 23:29:40.921: INFO: Exec stderr: "" May 20 23:29:40.921: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-5732"] Namespace:mount-propagation-5732 PodName:hostexec-node1-hch7v ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 20 23:29:40.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node1-hch7v in namespace mount-propagation-5732 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:41.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-5732" for this suite. 
• [SLOW TEST:22.915 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":5,"skipped":422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:41.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-7a84978a-14ea-4bec-a936-30a5fbbb4fa4 in namespace container-probe-404 May 20 23:29:45.144: INFO: Started pod liveness-override-7a84978a-14ea-4bec-a936-30a5fbbb4fa4 in namespace container-probe-404 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:29:45.147: INFO: Initial restart count of pod liveness-override-7a84978a-14ea-4bec-a936-30a5fbbb4fa4 is 0 May 20 23:29:47.155: INFO: Restart count of pod container-probe-404/liveness-override-7a84978a-14ea-4bec-a936-30a5fbbb4fa4 is now 1 (2.007534925s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:47.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-404" for this suite. 
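------------------------------
The quick restart above (about 2s after the initial check) is the point of the probe-level override: a failing liveness probe kills the container using the probe's own terminationGracePeriodSeconds rather than the pod-wide value. A hedged sketch, assuming the ProbeTerminationGracePeriod feature gate is enabled (it is alpha in v1.21, hence the [Feature:...] tag); the name, image, and numbers are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-override-demo    # illustrative
spec:
  terminationGracePeriodSeconds: 600   # pod-level default for kills
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 1000"]
    livenessProbe:
      exec:
        command: ["/bin/false"]        # always fails, forcing a restart
      initialDelaySeconds: 1
      periodSeconds: 1
      failureThreshold: 1
      terminationGracePeriodSeconds: 5 # probe-triggered kills use this shorter budget
------------------------------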
• [SLOW TEST:6.070 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":6,"skipped":466,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:47.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:51.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6711" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":7,"skipped":527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:57.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-4d21d23d-4715-4fb7-9421-765d160263b0 in namespace container-probe-7588 May 20 23:29:01.273: INFO: Started pod busybox-4d21d23d-4715-4fb7-9421-765d160263b0 in namespace container-probe-7588 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:29:01.276: INFO: Initial restart count of pod busybox-4d21d23d-4715-4fb7-9421-765d160263b0 is 0 May 20 23:29:51.407: INFO: Restart count of pod container-probe-7588/busybox-4d21d23d-4715-4fb7-9421-765d160263b0 is now 1 (50.131394301s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 
23:29:51.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7588" for this suite. • [SLOW TEST:54.193 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:51.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 20 23:29:51.667: INFO: Waiting up to 5m0s for pod "security-context-b218361a-d075-44d2-bf86-838efdd6062a" in namespace "security-context-8449" to be "Succeeded or Failed" May 20 23:29:51.669: INFO: Pod "security-context-b218361a-d075-44d2-bf86-838efdd6062a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.159665ms May 20 23:29:53.673: INFO: Pod "security-context-b218361a-d075-44d2-bf86-838efdd6062a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005610227s May 20 23:29:55.675: INFO: Pod "security-context-b218361a-d075-44d2-bf86-838efdd6062a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008145781s STEP: Saw pod success May 20 23:29:55.675: INFO: Pod "security-context-b218361a-d075-44d2-bf86-838efdd6062a" satisfied condition "Succeeded or Failed" May 20 23:29:55.677: INFO: Trying to get logs from node node2 pod security-context-b218361a-d075-44d2-bf86-838efdd6062a container test-container: STEP: delete the pod May 20 23:29:55.699: INFO: Waiting for pod security-context-b218361a-d075-44d2-bf86-838efdd6062a to disappear May 20 23:29:55.701: INFO: Pod security-context-b218361a-d075-44d2-bf86-838efdd6062a no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:55.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8449" for this suite. 
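------------------------------
For reference, the shape of a pod exercising container.SecurityContext.RunAsUser is roughly the following; the UID, pod name, and image are illustrative rather than the test's generated values. The container-level runAsUser takes precedence over any pod-level setting:

apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u"]   # prints the effective UID, which should match below
    securityContext:
      runAsUser: 1001                # container-level value overrides pod-level runAsUser
------------------------------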
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":657,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:51.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:55.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3285" for this suite. •S ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":5,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 20 23:29:55.912: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:52.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-46759870-187a-4530-ab04-f192267902a3 in namespace container-probe-3427 May 20 23:28:58.918: INFO: Started pod busybox-46759870-187a-4530-ab04-f192267902a3 in namespace container-probe-3427 May 20 23:28:58.918: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (1.826µs elapsed) May 20 23:29:00.921: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (2.00262288s elapsed) May 20 23:29:02.922: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (4.004485677s elapsed) May 20 23:29:04.923: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (6.004651284s elapsed) May 20 23:29:06.925: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (8.006550206s elapsed) May 20 23:29:08.929: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (10.010629306s elapsed) May 20 23:29:10.932: INFO: pod 
container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (12.014234087s elapsed) May 20 23:29:12.935: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (14.016542667s elapsed) May 20 23:29:14.936: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (16.017592203s elapsed) May 20 23:29:16.936: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (18.018310381s elapsed) May 20 23:29:18.938: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (20.020447446s elapsed) May 20 23:29:20.940: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (22.022141873s elapsed) May 20 23:29:22.942: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (24.024075147s elapsed) May 20 23:29:24.942: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (26.024412683s elapsed) May 20 23:29:26.943: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (28.025006082s elapsed) May 20 23:29:28.944: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (30.025980906s elapsed) May 20 23:29:30.947: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (32.028766411s elapsed) May 20 23:29:32.949: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (34.031224232s elapsed) May 20 23:29:34.950: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (36.031716425s elapsed) May 20 23:29:36.951: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (38.03309032s elapsed) May 20 23:29:38.952: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (40.03431729s elapsed) May 20 23:29:40.953: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (42.034869846s elapsed) May 20 23:29:42.956: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (44.03755278s elapsed) May 20 23:29:44.956: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (46.037631955s elapsed) May 20 23:29:46.956: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (48.038395856s elapsed) May 20 23:29:48.959: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (50.040511056s elapsed) May 20 23:29:50.959: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (52.041114438s elapsed) May 20 23:29:52.961: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (54.043444236s elapsed) May 20 23:29:54.962: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (56.043997442s elapsed) May 20 23:29:56.964: INFO: pod container-probe-3427/busybox-46759870-187a-4530-ab04-f192267902a3 is not ready (58.045537075s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:29:58.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3427" for this suite. 
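------------------------------
The pod above stays not-ready for the full ~58s observation window because its exec readiness probe keeps timing out. Exec probe timeouts are only enforced on kubelets >= 1.20 (hence the [MinimumKubeletVersion:1.20] tag); before that, timeoutSeconds was effectively ignored for exec probes. A minimal sketch of such a pod, with illustrative names and timings:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-timeout-demo    # illustrative
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["sh", "-c", "sleep 10"]   # always outlives the timeout below
      timeoutSeconds: 1
      periodSeconds: 2
      failureThreshold: 1
------------------------------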
• [SLOW TEST:66.102 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":4,"skipped":411,"failed":0} May 20 23:29:58.980: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:29:55.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 May 20 23:29:55.840: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-b06e5894-98ac-4370-b5d9-e70f775b6455" in namespace "security-context-test-4944" to be "Succeeded or Failed" May 20 23:29:55.842: INFO: Pod "busybox-readonly-true-b06e5894-98ac-4370-b5d9-e70f775b6455": Phase="Pending", Reason="", readiness=false. Elapsed: 1.947664ms May 20 23:29:57.846: INFO: Pod "busybox-readonly-true-b06e5894-98ac-4370-b5d9-e70f775b6455": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005705858s May 20 23:29:59.852: INFO: Pod "busybox-readonly-true-b06e5894-98ac-4370-b5d9-e70f775b6455": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011659534s May 20 23:30:01.855: INFO: Pod "busybox-readonly-true-b06e5894-98ac-4370-b5d9-e70f775b6455": Phase="Failed", Reason="", readiness=false. Elapsed: 6.014787372s May 20 23:30:01.855: INFO: Pod "busybox-readonly-true-b06e5894-98ac-4370-b5d9-e70f775b6455" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:30:01.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4944" for this suite. 
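------------------------------
Note that the wait condition in this test is the disjunction "Succeeded or Failed", so the observed Phase="Failed" satisfies it: with a read-only root filesystem the container's write is expected to fail and the pod to end up Failed. A sketch of the pod shape, with illustrative name and image:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /file"]   # writing to / should fail on a read-only rootfs
    securityContext:
      readOnlyRootFilesystem: true
------------------------------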
• [SLOW TEST:6.056 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":6,"skipped":684,"failed":0} May 20 23:30:01.866: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:49.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 20 23:28:49.368: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 May 20 23:28:49.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3278 create -f -' May 20 23:28:49.844: INFO: stderr: "" May 20 23:28:49.844: INFO: stdout: "pod/liveness-exec created\n" May 20 23:28:49.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-3278 create -f -' May 20 23:28:50.210: INFO: stderr: "" May 20 23:28:50.210: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 20 23:28:54.224: INFO: Pod: liveness-http, restart count:0 May 20 23:28:56.222: INFO: Pod: liveness-exec, restart count:0 May 20 23:28:56.227: INFO: Pod: liveness-http, restart count:0 May 20 23:28:58.228: INFO: Pod: liveness-exec, restart count:0 May 20 23:28:58.230: INFO: Pod: liveness-http, restart count:0 May 20 23:29:00.232: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:00.235: INFO: Pod: liveness-http, restart count:0 May 20 23:29:02.238: INFO: Pod: liveness-http, restart count:0 May 20 23:29:02.238: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:04.247: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:04.247: INFO: Pod: liveness-http, restart count:0 May 20 23:29:06.252: INFO: Pod: liveness-http, restart count:0 May 20 23:29:06.252: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:08.261: INFO: Pod: liveness-http, restart count:0 May 20 23:29:08.261: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:10.265: INFO: Pod: liveness-http, restart count:0 May 20 23:29:10.265: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:12.269: INFO: Pod: liveness-http, restart count:0 May 20 23:29:12.269: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:14.274: INFO: Pod: liveness-http, restart count:0 May 20 23:29:14.274: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:16.281: INFO: Pod: liveness-exec, restart 
count:0 May 20 23:29:16.281: INFO: Pod: liveness-http, restart count:0 May 20 23:29:18.287: INFO: Pod: liveness-http, restart count:0 May 20 23:29:18.287: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:20.290: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:20.290: INFO: Pod: liveness-http, restart count:0 May 20 23:29:22.297: INFO: Pod: liveness-http, restart count:0 May 20 23:29:22.297: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:24.305: INFO: Pod: liveness-http, restart count:0 May 20 23:29:24.305: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:26.310: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:26.310: INFO: Pod: liveness-http, restart count:0 May 20 23:29:28.315: INFO: Pod: liveness-http, restart count:0 May 20 23:29:28.315: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:30.318: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:30.318: INFO: Pod: liveness-http, restart count:0 May 20 23:29:32.323: INFO: Pod: liveness-http, restart count:1 May 20 23:29:32.324: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:32.324: INFO: Saw liveness-http restart, succeeded... May 20 23:29:34.332: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:36.336: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:38.340: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:40.343: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:42.349: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:44.356: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:46.362: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:48.368: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:50.371: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:52.376: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:54.384: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:56.388: INFO: Pod: liveness-exec, restart count:0 May 20 23:29:58.393: INFO: Pod: liveness-exec, restart count:0 May 20 23:30:00.397: INFO: Pod: liveness-exec, restart count:0 May 20 23:30:02.402: INFO: Pod: liveness-exec, restart count:1 May 20 23:30:02.402: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:30:02.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-3278" for this suite. 
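------------------------------
Both example pods eventually restart: liveness-http after roughly 42s and liveness-exec after roughly 73s, consistent with their probe budgets. The manifests piped to kubectl above resemble the classic liveness example from the Kubernetes docs; a sketch of the exec variant (image and timings illustrative, and the fixtures the test feeds in may differ):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]   # starts failing once /tmp/healthy is removed
      initialDelaySeconds: 5
      periodSeconds: 5
------------------------------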
• [SLOW TEST:73.077 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:48.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay May 20 23:28:52.410: INFO: watch delete seen for pod-submit-status-1-0 May 20 23:28:52.410: INFO: Pod pod-submit-status-1-0 on node node1 timings total=3.623396191s t=1.971s run=0s execute=0s May 20 23:29:05.803: INFO: watch delete seen for pod-submit-status-0-0 May 20 23:29:05.803: INFO: Pod pod-submit-status-0-0 on node node1 timings total=17.016690697s t=485ms run=0s execute=0s May 20 23:29:05.814: INFO: watch delete seen for pod-submit-status-2-0 May 20 23:29:05.815: INFO: Pod pod-submit-status-2-0 on node node1 timings total=17.027993179s t=794ms run=0s execute=0s May 20 23:29:07.016: INFO: watch delete seen for pod-submit-status-1-1 May 20 23:29:07.016: INFO: Pod pod-submit-status-1-1 on node node2 timings total=14.605823077s t=763ms run=0s execute=0s May 20 23:29:16.242: INFO: watch delete seen for pod-submit-status-2-1 May 20 23:29:16.242: INFO: Pod pod-submit-status-2-1 on node node1 timings total=10.427320719s t=701ms run=0s execute=0s May 20 23:29:16.259: INFO: watch delete seen for pod-submit-status-1-2 May 20 23:29:16.259: INFO: Pod pod-submit-status-1-2 on node node1 timings total=9.242663293s t=1.996s run=0s execute=0s May 20 23:29:16.812: INFO: watch delete seen for pod-submit-status-0-1 May 20 23:29:16.812: INFO: Pod pod-submit-status-0-1 on node node2 timings total=11.009100407s t=1.091s run=2s execute=0s May 20 23:29:17.036: INFO: watch delete seen for pod-submit-status-0-2 May 20 23:29:17.036: INFO: Pod pod-submit-status-0-2 on node node1 timings total=223.816636ms t=0s run=0s execute=0s May 20 23:29:25.781: INFO: watch delete seen for pod-submit-status-1-3 May 20 23:29:25.781: INFO: Pod pod-submit-status-1-3 on node node1 timings total=9.522043522s t=1.915s run=0s execute=0s May 20 23:29:26.931: INFO: watch delete seen for pod-submit-status-2-2 May 20 23:29:26.931: INFO: Pod pod-submit-status-2-2 on node node2 timings total=10.689070437s t=1.541s run=0s execute=0s May 20 23:29:36.819: INFO: watch delete seen for pod-submit-status-2-3 May 20 23:29:36.819: INFO: Pod pod-submit-status-2-3 on node node2 timings total=9.887560418s t=1.282s run=0s execute=0s May 20 23:29:36.826: INFO: watch delete seen for pod-submit-status-1-4 May 20 23:29:36.826: INFO: Pod pod-submit-status-1-4 on node node2 timings total=11.044891456s t=654ms run=0s execute=0s May 20 23:29:46.819: 
INFO: watch delete seen for pod-submit-status-2-4 May 20 23:29:46.819: INFO: Pod pod-submit-status-2-4 on node node2 timings total=10.000230611s t=756ms run=0s execute=0s May 20 23:29:46.829: INFO: watch delete seen for pod-submit-status-1-5 May 20 23:29:46.829: INFO: Pod pod-submit-status-1-5 on node node2 timings total=10.003497111s t=1.207s run=0s execute=0s May 20 23:29:53.202: INFO: watch delete seen for pod-submit-status-1-6 May 20 23:29:53.202: INFO: Pod pod-submit-status-1-6 on node node1 timings total=6.372977879s t=1.689s run=0s execute=0s May 20 23:29:55.170: INFO: watch delete seen for pod-submit-status-0-3 May 20 23:29:55.170: INFO: Pod pod-submit-status-0-3 on node node2 timings total=38.133650748s t=106ms run=0s execute=0s May 20 23:29:55.930: INFO: watch delete seen for pod-submit-status-2-5 May 20 23:29:55.930: INFO: Pod pod-submit-status-2-5 on node node1 timings total=9.1111173s t=1.308s run=0s execute=0s May 20 23:29:57.361: INFO: watch delete seen for pod-submit-status-1-7 May 20 23:29:57.361: INFO: Pod pod-submit-status-1-7 on node node2 timings total=4.158543493s t=836ms run=0s execute=0s May 20 23:29:59.371: INFO: watch delete seen for pod-submit-status-2-6 May 20 23:29:59.371: INFO: Pod pod-submit-status-2-6 on node node2 timings total=3.440915489s t=1.349s run=0s execute=0s May 20 23:30:06.816: INFO: watch delete seen for pod-submit-status-0-4 May 20 23:30:06.816: INFO: Pod pod-submit-status-0-4 on node node2 timings total=11.645588037s t=726ms run=0s execute=0s May 20 23:30:06.824: INFO: watch delete seen for pod-submit-status-1-8 May 20 23:30:06.824: INFO: Pod pod-submit-status-1-8 on node node2 timings total=9.463232612s t=1.309s run=0s execute=0s May 20 23:30:09.436: INFO: watch delete seen for pod-submit-status-0-5 May 20 23:30:09.436: INFO: Pod pod-submit-status-0-5 on node node2 timings total=2.620195116s t=634ms run=0s execute=0s May 20 23:30:15.819: INFO: watch delete seen for pod-submit-status-2-7 May 20 23:30:15.819: INFO: Pod pod-submit-status-2-7 on node node1 timings total=16.447589421s t=1.836s run=0s execute=0s May 20 23:30:16.834: INFO: watch delete seen for pod-submit-status-1-9 May 20 23:30:16.835: INFO: Pod pod-submit-status-1-9 on node node2 timings total=10.010219954s t=643ms run=0s execute=0s May 20 23:30:16.842: INFO: watch delete seen for pod-submit-status-0-6 May 20 23:30:16.842: INFO: Pod pod-submit-status-0-6 on node node2 timings total=7.405739406s t=579ms run=0s execute=0s May 20 23:30:25.153: INFO: watch delete seen for pod-submit-status-1-10 May 20 23:30:25.153: INFO: Pod pod-submit-status-1-10 on node node1 timings total=8.318539623s t=867ms run=0s execute=0s May 20 23:30:26.844: INFO: watch delete seen for pod-submit-status-0-7 May 20 23:30:26.845: INFO: Pod pod-submit-status-0-7 on node node2 timings total=10.002691862s t=1.224s run=0s execute=0s May 20 23:30:29.741: INFO: watch delete seen for pod-submit-status-0-8 May 20 23:30:29.741: INFO: Pod pod-submit-status-0-8 on node node2 timings total=2.896870816s t=415ms run=0s execute=0s May 20 23:30:35.785: INFO: watch delete seen for pod-submit-status-2-8 May 20 23:30:35.785: INFO: Pod pod-submit-status-2-8 on node node1 timings total=19.966507209s t=1.051s run=0s execute=0s May 20 23:30:35.794: INFO: watch delete seen for pod-submit-status-1-11 May 20 23:30:35.794: INFO: Pod pod-submit-status-1-11 on node node1 timings total=10.640567306s t=1.589s run=2s execute=0s May 20 23:30:46.479: INFO: watch delete seen for pod-submit-status-1-12 May 20 23:30:46.479: INFO: Pod 
pod-submit-status-1-12 on node node1 timings total=10.685143987s t=302ms run=0s execute=0s May 20 23:30:46.487: INFO: watch delete seen for pod-submit-status-2-9 May 20 23:30:46.487: INFO: Pod pod-submit-status-2-9 on node node1 timings total=10.701487524s t=1.104s run=0s execute=0s May 20 23:30:46.497: INFO: watch delete seen for pod-submit-status-0-9 May 20 23:30:46.497: INFO: Pod pod-submit-status-0-9 on node node1 timings total=16.755924468s t=1.797s run=0s execute=0s May 20 23:30:55.782: INFO: watch delete seen for pod-submit-status-2-10 May 20 23:30:55.782: INFO: Pod pod-submit-status-2-10 on node node1 timings total=9.295031423s t=888ms run=0s execute=0s May 20 23:30:55.792: INFO: watch delete seen for pod-submit-status-1-13 May 20 23:30:55.792: INFO: Pod pod-submit-status-1-13 on node node1 timings total=9.312567422s t=94ms run=0s execute=0s May 20 23:30:56.819: INFO: watch delete seen for pod-submit-status-0-10 May 20 23:30:56.819: INFO: Pod pod-submit-status-0-10 on node node2 timings total=10.321849734s t=92ms run=0s execute=0s May 20 23:30:58.526: INFO: watch delete seen for pod-submit-status-2-11 May 20 23:30:58.526: INFO: Pod pod-submit-status-2-11 on node node1 timings total=2.744004188s t=326ms run=0s execute=0s May 20 23:31:05.786: INFO: watch delete seen for pod-submit-status-1-14 May 20 23:31:05.786: INFO: Pod pod-submit-status-1-14 on node node1 timings total=9.993978981s t=1.472s run=0s execute=0s May 20 23:31:06.815: INFO: watch delete seen for pod-submit-status-0-11 May 20 23:31:06.815: INFO: Pod pod-submit-status-0-11 on node node2 timings total=9.995384464s t=867ms run=2s execute=0s May 20 23:31:16.577: INFO: watch delete seen for pod-submit-status-2-12 May 20 23:31:16.578: INFO: Pod pod-submit-status-2-12 on node node1 timings total=18.051328404s t=680ms run=0s execute=0s May 20 23:31:16.819: INFO: watch delete seen for pod-submit-status-0-12 May 20 23:31:16.819: INFO: Pod pod-submit-status-0-12 on node node2 timings total=10.004346255s t=1.476s run=2s execute=0s May 20 23:31:25.805: INFO: watch delete seen for pod-submit-status-0-13 May 20 23:31:25.805: INFO: Pod pod-submit-status-0-13 on node node1 timings total=8.985490475s t=1.497s run=0s execute=0s May 20 23:31:25.815: INFO: watch delete seen for pod-submit-status-2-13 May 20 23:31:25.815: INFO: Pod pod-submit-status-2-13 on node node1 timings total=9.23698645s t=25ms run=0s execute=0s May 20 23:31:35.786: INFO: watch delete seen for pod-submit-status-0-14 May 20 23:31:35.786: INFO: Pod pod-submit-status-0-14 on node node1 timings total=9.981457186s t=717ms run=0s execute=0s May 20 23:31:35.796: INFO: watch delete seen for pod-submit-status-2-14 May 20 23:31:35.796: INFO: Pod pod-submit-status-2-14 on node node1 timings total=9.981613488s t=1.945s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:31:35.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4181" for this suite. 
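------------------------------
Each pod-submit-status-* pod runs a container that always exits 1 and is deleted after a random delay; the watch above asserts that no intermediate status update ever reports a terminated-with-success state for a container that was still pending. The shape of such a pod is simply the following (name and image illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-status-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["false"]   # always exits 1; status must never show success
------------------------------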
• [SLOW TEST:167.043 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":4,"skipped":383,"failed":0} May 20 23:31:35.809: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:14.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 May 20 23:28:14.583: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:16.586: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:18.587: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:20.586: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:22.587: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) May 20 23:28:24.589: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 May 20 23:29:26.600: INFO: getRestartDelay: restartCount = 3, finishedAt=2022-05-20 23:28:53 +0000 UTC restartedAt=2022-05-20 23:29:25 +0000 UTC (32s) STEP: getting restart delay-1 May 20 23:30:16.799: INFO: getRestartDelay: restartCount = 4, finishedAt=2022-05-20 23:29:30 +0000 UTC restartedAt=2022-05-20 23:30:16 +0000 UTC (46s) STEP: getting restart delay-2 May 20 23:31:55.221: INFO: getRestartDelay: restartCount = 5, finishedAt=2022-05-20 23:30:21 +0000 UTC restartedAt=2022-05-20 23:31:54 +0000 UTC (1m33s) STEP: updating the image May 20 23:31:55.731: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update May 20 23:32:19.798: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-05-20 23:32:04 +0000 UTC restartedAt=2022-05-20 23:32:18 +0000 UTC (14s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:32:19.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2793" for this suite. 
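------------------------------
The measured delays show crash-loop back-off growing (32s, 46s, 1m33s) and then collapsing to 14s after the image update, confirming the reset. A hedged sketch of a crash-looping pod plus the kind of update that resets the kubelet's back-off timer; name, image, and the sleep are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-back-off-image    # illustrative
spec:
  containers:
  - name: back-off
    image: busybox
    command: ["sh", "-c", "sleep 5; exit 1"]   # crash-loops, so the restart delay grows
# Updating the container image resets the back-off, e.g.:
#   kubectl set image pod/pod-back-off-image back-off=busybox:1.29
------------------------------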
• [SLOW TEST:245.260 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":3,"skipped":324,"failed":0} May 20 23:32:19.810: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:44.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-216ec48e-e272-49f3-83d1-b49e444722de in namespace container-probe-291 May 20 23:28:50.207: INFO: Started pod startup-216ec48e-e272-49f3-83d1-b49e444722de in namespace container-probe-291 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:28:50.209: INFO: Initial restart count of pod startup-216ec48e-e272-49f3-83d1-b49e444722de is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:32:50.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-291" for this suite. 
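The startup pod above sat for roughly four minutes with its restart count pinned at 0: while a startup probe has not yet succeeded, the kubelet holds off liveness probing entirely, so a slow-starting container cannot be killed by its liveness probe. A minimal sketch of the two probes and the grace window they create, using the v1.21 API types from this run (note the Probe.Handler field is renamed ProbeHandler in newer releases); the thresholds are illustrative, not copied from the test.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Startup probe: up to 60 failures x 10s period = a 600s grace
	// window. Until it succeeds once, the kubelet does not run the
	// liveness probe at all, so a slow start cannot cause a restart.
	startup := corev1.Probe{
		Handler: corev1.Handler{ // renamed ProbeHandler in newer APIs
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
		},
		FailureThreshold: 60,
		PeriodSeconds:    10,
	}
	liveness := corev1.Probe{
		Handler: corev1.Handler{
			Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
		},
		FailureThreshold: 1,
		PeriodSeconds:    5,
	}
	window := startup.FailureThreshold * startup.PeriodSeconds
	fmt.Printf("liveness probe (every %ds) is deferred for up to %ds\n",
		liveness.PeriodSeconds, window)
}
------------------------------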
• [SLOW TEST:246.737 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":8,"skipped":502,"failed":0} May 20 23:32:50.906: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:53.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod liveness-f497eb6e-c5c6-4883-bf18-cf5b04c367b2 in namespace container-probe-5290 May 20 23:28:59.458: INFO: Started pod liveness-f497eb6e-c5c6-4883-bf18-cf5b04c367b2 in namespace container-probe-5290 STEP: checking the pod's current state and verifying that restartCount is present May 20 23:28:59.460: INFO: Initial restart count of pod liveness-f497eb6e-c5c6-4883-bf18-cf5b04c367b2 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:33:00.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5290" for this suite. 
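The liveness pod above likewise held restartCount at 0 for the full observation window: its HTTP liveness probe answers with a redirect pointing at a different host, and the kubelet's prober does not chase such non-local redirects; as this spec's passing result implies, the 3xx response is treated as a probe success (with a warning event) rather than a failure. A sketch of that redirect policy using Go's standard HTTP client; the endpoint URL is a stand-in (the real test serves the redirect from a test container), and the kubelet's actual prober is separate code.

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	client := &http.Client{
		// Follow redirects only while they stay on the original host;
		// a hop to a different host stops here and we keep the 3xx
		// response instead of treating the probe as failed.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if req.URL.Hostname() != via[0].URL.Hostname() {
				return http.ErrUseLastResponse
			}
			return nil
		},
	}
	// Hypothetical probe target that answers with a redirect to
	// another host, in the spirit of this spec's liveness endpoint.
	resp, err := client.Get("http://127.0.0.1:8080/redirect?loc=" +
		url.QueryEscape("http://0.0.0.0/"))
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	healthy := resp.StatusCode >= 200 && resp.StatusCode < 400
	fmt.Println("status:", resp.StatusCode, "healthy:", healthy)
}
------------------------------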
• [SLOW TEST:246.638 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":6,"skipped":240,"failed":0} May 20 23:33:00.057: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:28:37.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready May 20 23:28:37.437: INFO: Waiting up to 5m0s for node node1 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration May 20 23:28:38.450: INFO: node status heartbeat is unchanged for 1.004368934s, waiting for 1m20s May 20 23:28:39.449: INFO: node status heartbeat is unchanged for 2.004114783s, waiting for 1m20s May 20 23:28:40.449: INFO: node status heartbeat is unchanged for 3.003860489s, waiting for 1m20s May 20 23:28:41.450: INFO: node status heartbeat is unchanged for 4.004389852s, waiting for 1m20s May 20 23:28:42.449: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:28:42.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:41 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:41 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: 
"PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:31 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:41 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 20 23:28:43.450: INFO: node status heartbeat is unchanged for 1.000393018s, waiting for 1m20s May 20 23:28:44.450: INFO: node status heartbeat is unchanged for 2.000782457s, waiting for 1m20s May 20 23:28:45.449: INFO: node status heartbeat is unchanged for 2.999737625s, waiting for 1m20s May 20 23:28:46.450: INFO: node status heartbeat is unchanged for 4.000733741s, waiting for 1m20s May 20 23:28:47.449: INFO: node status heartbeat is unchanged for 4.999881685s, waiting for 1m20s May 20 23:28:48.449: INFO: node status heartbeat is unchanged for 6.000221937s, waiting for 1m20s May 20 23:28:49.449: INFO: node status heartbeat is unchanged for 6.999779666s, waiting for 1m20s May 20 23:28:50.449: INFO: node status heartbeat is unchanged for 7.999637353s, waiting for 1m20s May 20 23:28:51.449: INFO: node status heartbeat is unchanged for 8.999676244s, waiting for 1m20s May 20 23:28:52.449: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:28:52.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:51 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:51 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:41 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:51 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", 
...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 20 23:28:53.449: INFO: node status heartbeat is unchanged for 1.000133614s, waiting for 1m20s May 20 23:28:54.450: INFO: node status heartbeat is unchanged for 2.001061307s, waiting for 1m20s May 20 23:28:55.448: INFO: node status heartbeat is unchanged for 2.999092813s, waiting for 1m20s May 20 23:28:56.450: INFO: node status heartbeat is unchanged for 4.000304595s, waiting for 1m20s May 20 23:28:57.448: INFO: node status heartbeat is unchanged for 4.999055619s, waiting for 1m20s May 20 23:28:58.449: INFO: node status heartbeat is unchanged for 5.999930237s, waiting for 1m20s May 20 23:28:59.452: INFO: node status heartbeat is unchanged for 7.002988448s, waiting for 1m20s May 20 23:29:00.449: INFO: node status heartbeat is unchanged for 7.999800607s, waiting for 1m20s May 20 23:29:01.449: INFO: node status heartbeat is unchanged for 9.000107054s, waiting for 1m20s May 20 23:29:02.451: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 20 23:29:02.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:28:51 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:29:03.449: INFO: node status heartbeat is unchanged for 997.919548ms, waiting for 1m20s May 20 23:29:04.450: INFO: node status heartbeat is unchanged for 1.999339397s, waiting for 1m20s May 20 23:29:05.448: INFO: node status heartbeat is unchanged for 2.997276852s, waiting for 1m20s May 20 23:29:06.449: INFO: node status heartbeat is unchanged for 3.997728118s, waiting for 1m20s May 20 23:29:07.450: INFO: node status heartbeat is unchanged for 4.999306589s, waiting for 1m20s May 20 23:29:08.451: INFO: node status heartbeat is unchanged for 5.999882888s, waiting for 1m20s May 20 23:29:09.449: INFO: node status heartbeat is unchanged for 6.997889353s, waiting for 1m20s May 20 23:29:10.451: INFO: node status heartbeat is unchanged for 7.999651772s, waiting for 1m20s May 20 23:29:11.450: INFO: node status heartbeat is unchanged for 8.998936815s, waiting for 1m20s May 20 23:29:12.449: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:29:12.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:12 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:12 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:12 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:29:13.452: INFO: node status heartbeat is unchanged for 1.002607966s, waiting for 1m20s May 20 23:29:14.451: INFO: node status heartbeat is unchanged for 2.002100394s, waiting for 1m20s May 20 23:29:15.450: INFO: node status heartbeat is unchanged for 3.000445251s, waiting for 1m20s May 20 23:29:16.451: INFO: node status heartbeat is unchanged for 4.001738074s, waiting for 1m20s May 20 23:29:17.450: INFO: node status heartbeat is unchanged for 5.001063787s, waiting for 1m20s May 20 23:29:18.449: INFO: node status heartbeat is unchanged for 5.999855087s, waiting for 1m20s May 20 23:29:19.449: INFO: node status heartbeat is unchanged for 6.999972934s, waiting for 1m20s May 20 23:29:20.450: INFO: node status heartbeat is unchanged for 8.000403363s, waiting for 1m20s May 20 23:29:21.449: INFO: node status heartbeat is unchanged for 8.999592377s, waiting for 1m20s May 20 23:29:22.452: INFO: node status heartbeat is unchanged for 10.002996613s, waiting for 1m20s May 20 23:29:23.450: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:29:23.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:22 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:22 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:12 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:22 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, NodeInfo: {MachineID: "f2f0a31e38e446cda6cf4c679d8a2ef5", SystemUUID: "00CDA902-D022-E711-906E-0017A4403562", BootID: "c988afd2-8149-4515-9a6f-832552c2ed2d", KernelVersion: "3.10.0-1160.66.1.el7.x86_64", ...}, Images: []v1.ContainerImage{ ... 
// 31 identical elements {Names: {"quay.io/prometheus-operator/prometheus-config-reloader@sha256:4d"..., "quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1"}, SizeBytes: 13433274}, {Names: {"gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebc"..., "gcr.io/google-samples/hello-go-gke:1.0"}, SizeBytes: 11443478}, + { + Names: []string{ + "k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf"..., + "k8s.gcr.io/e2e-test-images/nonewprivs:1.3", + }, + SizeBytes: 7107254, + }, {Names: {"appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172"..., "appropriate/curl:edge"}, SizeBytes: 5654234}, {Names: {"alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f7"..., "alpine:3.12"}, SizeBytes: 5581590}, ... // 5 identical elements }, VolumesInUse: nil, VolumesAttached: nil, Config: nil, } May 20 23:29:24.451: INFO: node status heartbeat is unchanged for 1.001439222s, waiting for 1m20s May 20 23:29:25.450: INFO: node status heartbeat is unchanged for 2.000702117s, waiting for 1m20s May 20 23:29:26.450: INFO: node status heartbeat is unchanged for 2.999779769s, waiting for 1m20s May 20 23:29:27.451: INFO: node status heartbeat is unchanged for 4.000800845s, waiting for 1m20s May 20 23:29:28.450: INFO: node status heartbeat is unchanged for 5.000256525s, waiting for 1m20s May 20 23:29:29.450: INFO: node status heartbeat is unchanged for 6.000531768s, waiting for 1m20s May 20 23:29:30.450: INFO: node status heartbeat is unchanged for 6.999994343s, waiting for 1m20s May 20 23:29:31.450: INFO: node status heartbeat is unchanged for 7.999880257s, waiting for 1m20s May 20 23:29:32.454: INFO: node status heartbeat is unchanged for 9.004205381s, waiting for 1m20s May 20 23:29:33.450: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:29:33.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:32 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:32 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:22 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:32 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 
+0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 20 23:29:34.454: INFO: node status heartbeat is unchanged for 1.004152497s, waiting for 1m20s May 20 23:29:35.449: INFO: node status heartbeat is unchanged for 1.999669384s, waiting for 1m20s May 20 23:29:36.450: INFO: node status heartbeat is unchanged for 3.000184026s, waiting for 1m20s May 20 23:29:37.450: INFO: node status heartbeat is unchanged for 4.000411243s, waiting for 1m20s May 20 23:29:38.451: INFO: node status heartbeat is unchanged for 5.000840442s, waiting for 1m20s May 20 23:29:39.452: INFO: node status heartbeat is unchanged for 6.002527667s, waiting for 1m20s May 20 23:29:40.448: INFO: node status heartbeat is unchanged for 6.99874215s, waiting for 1m20s May 20 23:29:41.450: INFO: node status heartbeat is unchanged for 7.999885676s, waiting for 1m20s May 20 23:29:42.452: INFO: node status heartbeat is unchanged for 9.001902264s, waiting for 1m20s May 20 23:29:43.450: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:29:43.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:42 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:42 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:32 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:42 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:29:44.452: INFO: node status heartbeat is unchanged for 1.001501317s, waiting for 1m20s May 20 23:29:45.450: INFO: node status heartbeat is unchanged for 1.999560927s, waiting for 1m20s May 20 23:29:46.448: INFO: node status heartbeat is unchanged for 2.998253398s, waiting for 1m20s May 20 23:29:47.450: INFO: node status heartbeat is unchanged for 4.000262181s, waiting for 1m20s May 20 23:29:48.451: INFO: node status heartbeat is unchanged for 5.000916133s, waiting for 1m20s May 20 23:29:49.450: INFO: node status heartbeat is unchanged for 6.000060687s, waiting for 1m20s May 20 23:29:50.449: INFO: node status heartbeat is unchanged for 6.998973055s, waiting for 1m20s May 20 23:29:51.449: INFO: node status heartbeat is unchanged for 7.998673482s, waiting for 1m20s May 20 23:29:52.452: INFO: node status heartbeat is unchanged for 9.00196949s, waiting for 1m20s May 20 23:29:53.451: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:29:53.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:52 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:52 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:42 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:52 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:29:54.450: INFO: node status heartbeat is unchanged for 999.093359ms, waiting for 1m20s May 20 23:29:55.452: INFO: node status heartbeat is unchanged for 2.001130066s, waiting for 1m20s May 20 23:29:56.449: INFO: node status heartbeat is unchanged for 2.998572911s, waiting for 1m20s May 20 23:29:57.450: INFO: node status heartbeat is unchanged for 3.999665169s, waiting for 1m20s May 20 23:29:58.449: INFO: node status heartbeat is unchanged for 4.998859035s, waiting for 1m20s May 20 23:29:59.451: INFO: node status heartbeat is unchanged for 6.000839774s, waiting for 1m20s May 20 23:30:00.451: INFO: node status heartbeat is unchanged for 7.000314631s, waiting for 1m20s May 20 23:30:01.450: INFO: node status heartbeat is unchanged for 7.999128396s, waiting for 1m20s May 20 23:30:02.450: INFO: node status heartbeat is unchanged for 8.999539053s, waiting for 1m20s May 20 23:30:03.453: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:30:03.458: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:29:52 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:02 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:30:04.454: INFO: node status heartbeat is unchanged for 1.001129557s, waiting for 1m20s May 20 23:30:05.450: INFO: node status heartbeat is unchanged for 1.997165877s, waiting for 1m20s May 20 23:30:06.449: INFO: node status heartbeat is unchanged for 2.99563672s, waiting for 1m20s May 20 23:30:07.449: INFO: node status heartbeat is unchanged for 3.996170269s, waiting for 1m20s May 20 23:30:08.453: INFO: node status heartbeat is unchanged for 4.999545401s, waiting for 1m20s May 20 23:30:09.450: INFO: node status heartbeat is unchanged for 5.996391728s, waiting for 1m20s May 20 23:30:10.450: INFO: node status heartbeat is unchanged for 6.996783034s, waiting for 1m20s May 20 23:30:11.450: INFO: node status heartbeat is unchanged for 7.996654334s, waiting for 1m20s May 20 23:30:12.451: INFO: node status heartbeat is unchanged for 8.997471937s, waiting for 1m20s May 20 23:30:13.449: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 20 23:30:13.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:02 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:30:14.452: INFO: node status heartbeat is unchanged for 1.002617649s, waiting for 1m20s May 20 23:30:15.451: INFO: node status heartbeat is unchanged for 2.001908186s, waiting for 1m20s May 20 23:30:16.452: INFO: node status heartbeat is unchanged for 3.002783844s, waiting for 1m20s May 20 23:30:17.449: INFO: node status heartbeat is unchanged for 4.000105354s, waiting for 1m20s May 20 23:30:18.450: INFO: node status heartbeat is unchanged for 5.001373621s, waiting for 1m20s May 20 23:30:19.454: INFO: node status heartbeat is unchanged for 6.004972244s, waiting for 1m20s May 20 23:30:20.449: INFO: node status heartbeat is unchanged for 6.99958581s, waiting for 1m20s May 20 23:30:21.449: INFO: node status heartbeat is unchanged for 8.00042822s, waiting for 1m20s May 20 23:30:22.450: INFO: node status heartbeat is unchanged for 9.00070333s, waiting for 1m20s May 20 23:30:23.449: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:30:23.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:30:24.450: INFO: node status heartbeat is unchanged for 1.000861363s, waiting for 1m20s May 20 23:30:25.450: INFO: node status heartbeat is unchanged for 2.000152713s, waiting for 1m20s May 20 23:30:26.450: INFO: node status heartbeat is unchanged for 3.000415879s, waiting for 1m20s May 20 23:30:27.450: INFO: node status heartbeat is unchanged for 4.000295423s, waiting for 1m20s May 20 23:30:28.449: INFO: node status heartbeat is unchanged for 4.999402427s, waiting for 1m20s May 20 23:30:29.451: INFO: node status heartbeat is unchanged for 6.001244632s, waiting for 1m20s May 20 23:30:30.450: INFO: node status heartbeat is unchanged for 7.000858838s, waiting for 1m20s May 20 23:30:31.449: INFO: node status heartbeat is unchanged for 7.999865196s, waiting for 1m20s May 20 23:30:32.451: INFO: node status heartbeat is unchanged for 9.001774682s, waiting for 1m20s May 20 23:30:33.449: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:30:33.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:30:34.451: INFO: node status heartbeat is unchanged for 1.00158314s, waiting for 1m20s May 20 23:30:35.449: INFO: node status heartbeat is unchanged for 2.000204026s, waiting for 1m20s May 20 23:30:36.450: INFO: node status heartbeat is unchanged for 3.001125889s, waiting for 1m20s May 20 23:30:37.449: INFO: node status heartbeat is unchanged for 4.000051766s, waiting for 1m20s May 20 23:30:38.450: INFO: node status heartbeat is unchanged for 5.001182451s, waiting for 1m20s May 20 23:30:39.449: INFO: node status heartbeat is unchanged for 5.999960269s, waiting for 1m20s May 20 23:30:40.450: INFO: node status heartbeat is unchanged for 7.000314266s, waiting for 1m20s May 20 23:30:41.451: INFO: node status heartbeat is unchanged for 8.001405211s, waiting for 1m20s May 20 23:30:42.449: INFO: node status heartbeat is unchanged for 8.999638262s, waiting for 1m20s May 20 23:30:43.451: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:30:43.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:30:44.452: INFO: node status heartbeat is unchanged for 1.000866533s, waiting for 1m20s May 20 23:30:45.450: INFO: node status heartbeat is unchanged for 1.999021135s, waiting for 1m20s May 20 23:30:46.453: INFO: node status heartbeat is unchanged for 3.002097083s, waiting for 1m20s May 20 23:30:47.451: INFO: node status heartbeat is unchanged for 4.000648894s, waiting for 1m20s May 20 23:30:48.450: INFO: node status heartbeat is unchanged for 4.998814012s, waiting for 1m20s May 20 23:30:49.451: INFO: node status heartbeat is unchanged for 5.99999934s, waiting for 1m20s May 20 23:30:50.450: INFO: node status heartbeat is unchanged for 6.99915968s, waiting for 1m20s May 20 23:30:51.449: INFO: node status heartbeat is unchanged for 7.998187478s, waiting for 1m20s May 20 23:30:52.450: INFO: node status heartbeat is unchanged for 8.999109513s, waiting for 1m20s May 20 23:30:53.450: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:30:53.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:30:54.453: INFO: node status heartbeat is unchanged for 1.003354705s, waiting for 1m20s May 20 23:30:55.451: INFO: node status heartbeat is unchanged for 2.001735376s, waiting for 1m20s May 20 23:30:56.451: INFO: node status heartbeat is unchanged for 3.001021582s, waiting for 1m20s May 20 23:30:57.450: INFO: node status heartbeat is unchanged for 4.000766631s, waiting for 1m20s May 20 23:30:58.451: INFO: node status heartbeat is unchanged for 5.001035869s, waiting for 1m20s May 20 23:30:59.452: INFO: node status heartbeat is unchanged for 6.002290439s, waiting for 1m20s May 20 23:31:00.450: INFO: node status heartbeat is unchanged for 7.000240335s, waiting for 1m20s May 20 23:31:01.450: INFO: node status heartbeat is unchanged for 8.000179809s, waiting for 1m20s May 20 23:31:02.452: INFO: node status heartbeat is unchanged for 9.002390381s, waiting for 1m20s May 20 23:31:03.451: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:31:03.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:30:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:31:04.452: INFO: node status heartbeat is unchanged for 1.00130005s, waiting for 1m20s May 20 23:31:05.450: INFO: node status heartbeat is unchanged for 1.999921079s, waiting for 1m20s May 20 23:31:06.453: INFO: node status heartbeat is unchanged for 3.002128396s, waiting for 1m20s May 20 23:31:07.451: INFO: node status heartbeat is unchanged for 4.000517153s, waiting for 1m20s May 20 23:31:08.449: INFO: node status heartbeat is unchanged for 4.998550722s, waiting for 1m20s May 20 23:31:09.449: INFO: node status heartbeat is unchanged for 5.998899988s, waiting for 1m20s May 20 23:31:10.449: INFO: node status heartbeat is unchanged for 6.998976719s, waiting for 1m20s May 20 23:31:11.452: INFO: node status heartbeat is unchanged for 8.001778168s, waiting for 1m20s May 20 23:31:12.452: INFO: node status heartbeat is unchanged for 9.001500599s, waiting for 1m20s May 20 23:31:13.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:31:13.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:31:14.451: INFO: node status heartbeat is unchanged for 1.000007829s, waiting for 1m20s May 20 23:31:15.449: INFO: node status heartbeat is unchanged for 1.99778843s, waiting for 1m20s May 20 23:31:16.450: INFO: node status heartbeat is unchanged for 2.998399993s, waiting for 1m20s May 20 23:31:17.450: INFO: node status heartbeat is unchanged for 3.998630718s, waiting for 1m20s May 20 23:31:18.450: INFO: node status heartbeat is unchanged for 4.998137681s, waiting for 1m20s May 20 23:31:19.451: INFO: node status heartbeat is unchanged for 5.999409984s, waiting for 1m20s May 20 23:31:20.450: INFO: node status heartbeat is unchanged for 6.998364858s, waiting for 1m20s May 20 23:31:21.450: INFO: node status heartbeat is unchanged for 7.998513234s, waiting for 1m20s May 20 23:31:22.450: INFO: node status heartbeat is unchanged for 8.998128988s, waiting for 1m20s May 20 23:31:23.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:31:23.457: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:31:24.451: INFO: node status heartbeat is unchanged for 999.060265ms, waiting for 1m20s May 20 23:31:25.451: INFO: node status heartbeat is unchanged for 1.999169661s, waiting for 1m20s May 20 23:31:26.450: INFO: node status heartbeat is unchanged for 2.998627136s, waiting for 1m20s May 20 23:31:27.452: INFO: node status heartbeat is unchanged for 4.000074472s, waiting for 1m20s May 20 23:31:28.451: INFO: node status heartbeat is unchanged for 4.999347899s, waiting for 1m20s May 20 23:31:29.451: INFO: node status heartbeat is unchanged for 5.99873041s, waiting for 1m20s May 20 23:31:30.449: INFO: node status heartbeat is unchanged for 6.997598814s, waiting for 1m20s May 20 23:31:31.451: INFO: node status heartbeat is unchanged for 7.998772141s, waiting for 1m20s May 20 23:31:32.451: INFO: node status heartbeat is unchanged for 8.998939447s, waiting for 1m20s May 20 23:31:33.449: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:31:33.454: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:31:34.453: INFO: node status heartbeat is unchanged for 1.003595394s, waiting for 1m20s May 20 23:31:35.450: INFO: node status heartbeat is unchanged for 2.001059735s, waiting for 1m20s May 20 23:31:36.449: INFO: node status heartbeat is unchanged for 3.000176522s, waiting for 1m20s May 20 23:31:37.449: INFO: node status heartbeat is unchanged for 3.999749744s, waiting for 1m20s May 20 23:31:38.450: INFO: node status heartbeat is unchanged for 5.001003103s, waiting for 1m20s May 20 23:31:39.452: INFO: node status heartbeat is unchanged for 6.00310739s, waiting for 1m20s May 20 23:31:40.450: INFO: node status heartbeat is unchanged for 7.001035901s, waiting for 1m20s May 20 23:31:41.450: INFO: node status heartbeat is unchanged for 8.000955333s, waiting for 1m20s May 20 23:31:42.450: INFO: node status heartbeat is unchanged for 9.000916114s, waiting for 1m20s May 20 23:31:43.449: INFO: node status heartbeat is unchanged for 9.999897872s, waiting for 1m20s May 20 23:31:44.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:31:44.457: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:31:45.449: INFO: node status heartbeat is unchanged for 997.38239ms, waiting for 1m20s May 20 23:31:46.449: INFO: node status heartbeat is unchanged for 1.99749345s, waiting for 1m20s May 20 23:31:47.450: INFO: node status heartbeat is unchanged for 2.997753422s, waiting for 1m20s May 20 23:31:48.451: INFO: node status heartbeat is unchanged for 3.998818029s, waiting for 1m20s May 20 23:31:49.449: INFO: node status heartbeat is unchanged for 4.997309624s, waiting for 1m20s May 20 23:31:50.450: INFO: node status heartbeat is unchanged for 5.998373304s, waiting for 1m20s May 20 23:31:51.451: INFO: node status heartbeat is unchanged for 6.999513713s, waiting for 1m20s May 20 23:31:52.452: INFO: node status heartbeat is unchanged for 7.999864637s, waiting for 1m20s May 20 23:31:53.450: INFO: node status heartbeat is unchanged for 8.998070352s, waiting for 1m20s May 20 23:31:54.451: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:31:54.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:31:55.450: INFO: node status heartbeat is unchanged for 999.954735ms, waiting for 1m20s May 20 23:31:56.451: INFO: node status heartbeat is unchanged for 2.000398404s, waiting for 1m20s May 20 23:31:57.450: INFO: node status heartbeat is unchanged for 2.999395799s, waiting for 1m20s May 20 23:31:58.450: INFO: node status heartbeat is unchanged for 3.999054645s, waiting for 1m20s May 20 23:31:59.452: INFO: node status heartbeat is unchanged for 5.001666127s, waiting for 1m20s May 20 23:32:00.450: INFO: node status heartbeat is unchanged for 5.999009047s, waiting for 1m20s May 20 23:32:01.452: INFO: node status heartbeat is unchanged for 7.001318416s, waiting for 1m20s May 20 23:32:02.452: INFO: node status heartbeat is unchanged for 8.001001391s, waiting for 1m20s May 20 23:32:03.452: INFO: node status heartbeat is unchanged for 9.001432287s, waiting for 1m20s May 20 23:32:04.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:32:04.457: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:31:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:32:05.450: INFO: node status heartbeat is unchanged for 997.796633ms, waiting for 1m20s May 20 23:32:06.453: INFO: node status heartbeat is unchanged for 2.001155029s, waiting for 1m20s May 20 23:32:07.451: INFO: node status heartbeat is unchanged for 2.999724623s, waiting for 1m20s May 20 23:32:08.451: INFO: node status heartbeat is unchanged for 3.999260203s, waiting for 1m20s May 20 23:32:09.451: INFO: node status heartbeat is unchanged for 4.999009551s, waiting for 1m20s May 20 23:32:10.450: INFO: node status heartbeat is unchanged for 5.997989748s, waiting for 1m20s May 20 23:32:11.452: INFO: node status heartbeat is unchanged for 7.000192091s, waiting for 1m20s May 20 23:32:12.451: INFO: node status heartbeat is unchanged for 7.999484055s, waiting for 1m20s May 20 23:32:13.450: INFO: node status heartbeat is unchanged for 8.998344204s, waiting for 1m20s May 20 23:32:14.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:32:14.457: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:32:15.451: INFO: node status heartbeat is unchanged for 999.317069ms, waiting for 1m20s May 20 23:32:16.451: INFO: node status heartbeat is unchanged for 1.999367678s, waiting for 1m20s May 20 23:32:17.450: INFO: node status heartbeat is unchanged for 2.998735859s, waiting for 1m20s May 20 23:32:18.453: INFO: node status heartbeat is unchanged for 4.000812607s, waiting for 1m20s May 20 23:32:19.451: INFO: node status heartbeat is unchanged for 4.999303638s, waiting for 1m20s May 20 23:32:20.451: INFO: node status heartbeat is unchanged for 5.998997101s, waiting for 1m20s May 20 23:32:21.450: INFO: node status heartbeat is unchanged for 6.998651766s, waiting for 1m20s May 20 23:32:22.454: INFO: node status heartbeat is unchanged for 8.001888188s, waiting for 1m20s May 20 23:32:23.453: INFO: node status heartbeat is unchanged for 9.001048706s, waiting for 1m20s May 20 23:32:24.451: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:32:24.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:23 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:32:25.450: INFO: node status heartbeat is unchanged for 998.988427ms, waiting for 1m20s May 20 23:32:26.451: INFO: node status heartbeat is unchanged for 2.00004793s, waiting for 1m20s May 20 23:32:27.453: INFO: node status heartbeat is unchanged for 3.002175005s, waiting for 1m20s May 20 23:32:28.452: INFO: node status heartbeat is unchanged for 4.001686713s, waiting for 1m20s May 20 23:32:29.453: INFO: node status heartbeat is unchanged for 5.002159485s, waiting for 1m20s May 20 23:32:30.451: INFO: node status heartbeat is unchanged for 6.000394298s, waiting for 1m20s May 20 23:32:31.450: INFO: node status heartbeat is unchanged for 6.999539519s, waiting for 1m20s May 20 23:32:32.454: INFO: node status heartbeat is unchanged for 8.002983873s, waiting for 1m20s May 20 23:32:33.450: INFO: node status heartbeat is unchanged for 8.999937569s, waiting for 1m20s May 20 23:32:34.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:32:34.457: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:23 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:33 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:32:35.451: INFO: node status heartbeat is unchanged for 999.441136ms, waiting for 1m20s May 20 23:32:36.454: INFO: node status heartbeat is unchanged for 2.002070634s, waiting for 1m20s May 20 23:32:37.450: INFO: node status heartbeat is unchanged for 2.997783002s, waiting for 1m20s May 20 23:32:38.450: INFO: node status heartbeat is unchanged for 3.997566272s, waiting for 1m20s May 20 23:32:39.453: INFO: node status heartbeat is unchanged for 5.000666126s, waiting for 1m20s May 20 23:32:40.451: INFO: node status heartbeat is unchanged for 5.99883387s, waiting for 1m20s May 20 23:32:41.450: INFO: node status heartbeat is unchanged for 6.997938207s, waiting for 1m20s May 20 23:32:42.451: INFO: node status heartbeat is unchanged for 7.999468349s, waiting for 1m20s May 20 23:32:43.481: INFO: node status heartbeat is unchanged for 9.028629591s, waiting for 1m20s May 20 23:32:44.454: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:32:44.459: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:33 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:43 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:32:45.451: INFO: node status heartbeat is unchanged for 997.4094ms, waiting for 1m20s May 20 23:32:46.450: INFO: node status heartbeat is unchanged for 1.996794514s, waiting for 1m20s May 20 23:32:47.449: INFO: node status heartbeat is unchanged for 2.995166528s, waiting for 1m20s May 20 23:32:48.454: INFO: node status heartbeat is unchanged for 4.000043051s, waiting for 1m20s May 20 23:32:49.451: INFO: node status heartbeat is unchanged for 4.996941321s, waiting for 1m20s May 20 23:32:50.450: INFO: node status heartbeat is unchanged for 5.9960079s, waiting for 1m20s May 20 23:32:51.452: INFO: node status heartbeat is unchanged for 6.998617648s, waiting for 1m20s May 20 23:32:52.453: INFO: node status heartbeat is unchanged for 7.999619356s, waiting for 1m20s May 20 23:32:53.450: INFO: node status heartbeat is unchanged for 8.996245081s, waiting for 1m20s May 20 23:32:54.452: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:32:54.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:43 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:53 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:32:55.451: INFO: node status heartbeat is unchanged for 999.267833ms, waiting for 1m20s May 20 23:32:56.451: INFO: node status heartbeat is unchanged for 1.999546696s, waiting for 1m20s May 20 23:32:57.450: INFO: node status heartbeat is unchanged for 2.998442414s, waiting for 1m20s May 20 23:32:58.451: INFO: node status heartbeat is unchanged for 3.999288258s, waiting for 1m20s May 20 23:32:59.451: INFO: node status heartbeat is unchanged for 4.999757271s, waiting for 1m20s May 20 23:33:00.450: INFO: node status heartbeat is unchanged for 5.998117961s, waiting for 1m20s May 20 23:33:01.449: INFO: node status heartbeat is unchanged for 6.997821978s, waiting for 1m20s May 20 23:33:02.450: INFO: node status heartbeat is unchanged for 7.998873442s, waiting for 1m20s May 20 23:33:03.452: INFO: node status heartbeat is unchanged for 9.000423891s, waiting for 1m20s May 20 23:33:04.451: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:33:04.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:32:53 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:33:05.449: INFO: node status heartbeat is unchanged for 997.926352ms, waiting for 1m20s May 20 23:33:06.449: INFO: node status heartbeat is unchanged for 1.998450755s, waiting for 1m20s May 20 23:33:07.451: INFO: node status heartbeat is unchanged for 3.000426468s, waiting for 1m20s May 20 23:33:08.454: INFO: node status heartbeat is unchanged for 4.003445572s, waiting for 1m20s May 20 23:33:09.452: INFO: node status heartbeat is unchanged for 5.001601283s, waiting for 1m20s May 20 23:33:10.450: INFO: node status heartbeat is unchanged for 5.99951561s, waiting for 1m20s May 20 23:33:11.452: INFO: node status heartbeat is unchanged for 7.00067786s, waiting for 1m20s May 20 23:33:12.451: INFO: node status heartbeat is unchanged for 7.999993588s, waiting for 1m20s May 20 23:33:13.451: INFO: node status heartbeat is unchanged for 8.999983066s, waiting for 1m20s May 20 23:33:14.450: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:33:14.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:03 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:13 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:33:15.451: INFO: node status heartbeat is unchanged for 1.001430361s, waiting for 1m20s May 20 23:33:16.452: INFO: node status heartbeat is unchanged for 2.002355439s, waiting for 1m20s May 20 23:33:17.452: INFO: node status heartbeat is unchanged for 3.001810455s, waiting for 1m20s May 20 23:33:18.453: INFO: node status heartbeat is unchanged for 4.002828737s, waiting for 1m20s May 20 23:33:19.451: INFO: node status heartbeat is unchanged for 5.001531358s, waiting for 1m20s May 20 23:33:20.449: INFO: node status heartbeat is unchanged for 5.999254409s, waiting for 1m20s May 20 23:33:21.452: INFO: node status heartbeat is unchanged for 7.002005284s, waiting for 1m20s May 20 23:33:22.451: INFO: node status heartbeat is unchanged for 8.001621344s, waiting for 1m20s May 20 23:33:23.452: INFO: node status heartbeat is unchanged for 9.002106893s, waiting for 1m20s May 20 23:33:24.452: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 20 23:33:24.456: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:24 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:24 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:13 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:24 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:33:25.451: INFO: node status heartbeat is unchanged for 999.235327ms, waiting for 1m20s May 20 23:33:26.452: INFO: node status heartbeat is unchanged for 2.000750353s, waiting for 1m20s May 20 23:33:27.452: INFO: node status heartbeat is unchanged for 3.000484318s, waiting for 1m20s May 20 23:33:28.452: INFO: node status heartbeat is unchanged for 3.99995788s, waiting for 1m20s May 20 23:33:29.451: INFO: node status heartbeat is unchanged for 4.999128355s, waiting for 1m20s May 20 23:33:30.451: INFO: node status heartbeat is unchanged for 5.999358153s, waiting for 1m20s May 20 23:33:31.451: INFO: node status heartbeat is unchanged for 6.999164459s, waiting for 1m20s May 20 23:33:32.451: INFO: node status heartbeat is unchanged for 7.999116983s, waiting for 1m20s May 20 23:33:33.450: INFO: node status heartbeat is unchanged for 8.998475883s, waiting for 1m20s May 20 23:33:34.450: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 20 23:33:34.455: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"example.com/fakecpu": {i: {...}, s: "1k", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:07:03 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:24 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:34 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:24 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:34 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:24 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-20 23:33:34 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-20 20:03:10 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-20 20:04:16 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.207"}, {Type: "Hostname", Address: "node1"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 20 23:33:35.456: INFO: node status heartbeat is unchanged for 1.006164032s, waiting for 1m20s May 20 23:33:36.450: INFO: node status heartbeat is unchanged for 2.000550578s, waiting for 1m20s May 20 23:33:37.449: INFO: node status heartbeat is unchanged for 2.998934587s, waiting for 1m20s May 20 23:33:37.451: INFO: node status heartbeat is unchanged for 3.001638662s, waiting for 1m20s STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:33:37.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-3998" for this suite. • [SLOW TEST:300.054 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":5,"skipped":657,"failed":0} May 20 23:33:37.474: INFO: Running AfterSuite actions on all nodes
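The NodeLease spec above polls the node's status roughly once per second and diffs v1.NodeStatus: with NodeLease enabled, the kubelet renews its Lease frequently but only patches node status about every 10s, which is why the condition heartbeat timestamps in the diffs advance in ~10s steps while the node stays Ready throughout. Below is a minimal client-go sketch of that polling pattern, not the e2e framework's actual helper; the kubeconfig path and node name are taken from this log, and the loop bounds are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as it appears in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	var lastHeartbeat time.Time
	// Poll once per second, as the spec's per-second log lines do.
	for i := 0; i < 15; i++ {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "node1", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != v1.NodeReady {
				continue
			}
			hb := cond.LastHeartbeatTime.Time
			if hb.Equal(lastHeartbeat) {
				// Heartbeat timestamp unchanged since the last poll.
				fmt.Printf("node status heartbeat is unchanged for %s\n",
					time.Since(hb).Round(time.Millisecond))
			} else {
				fmt.Printf("node status heartbeat changed at %s\n", hb)
				lastHeartbeat = hb
			}
		}
		time.Sleep(time.Second)
	}
}
```

Run against a live cluster, this prints a string of "unchanged" lines punctuated by a change roughly every ten seconds, matching the cadence recorded above; the real spec additionally diffs the full status object (the go-cmp output shown) to verify that only the heartbeat fields moved.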
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 20 23:27:42.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 May 20 23:27:42.324: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:44.329: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:46.329: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:48.330: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:50.328: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:52.328: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:54.329: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:56.330: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 20 23:27:58.329: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped May 20 23:39:10.662: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-05-20 23:34:02 +0000 UTC restartedAt=2022-05-20 23:39:09 +0000 UTC (5m7s) May 20 23:44:21.109: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-05-20 23:39:14 +0000 UTC restartedAt=2022-05-20 23:44:18 +0000 UTC (5m4s) May 20 23:49:37.594: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-05-20 23:44:23 +0000 UTC restartedAt=2022-05-20 23:49:37 +0000 UTC (5m14s) STEP: getting restart delay after a capped delay May 20 23:54:49.046: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-05-20 23:49:42 +0000 UTC restartedAt=2022-05-20 23:54:47 +0000 UTC (5m5s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 20 23:54:49.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1685" for this suite. • [SLOW TEST:1626.769 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 ------------------------------ {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":2,"skipped":253,"failed":0} May 20 23:54:49.060: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":7,"skipped":1503,"failed":0} May 20 23:30:02.412: INFO: Running AfterSuite actions on all nodes May 20 23:54:49.112: INFO: Running AfterSuite actions on node 1 May 20 23:54:49.112: INFO: Skipping dumping logs from cluster Ran 53 of 5773 Specs in 1627.742 seconds SUCCESS! -- 53 Passed | 0 Failed | 0 Pending | 5720 Skipped Ginkgo ran 1 suite in 27m9.32905202s Test Suite Passed
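The back-off-cap spec above exercises the kubelet's exponential container restart backoff: the delay between restarts grows until it saturates at MaxContainerBackOff (5 minutes), which is why restart counts 7 through 10 all land in the 5m4s-5m14s range (the cap plus pod-sync latency), and why the spec checks one further restart "after a capped delay" to confirm the delay no longer grows. The sketch below is a rough model of that saturation, assuming the commonly cited kubelet defaults of a 10s initial backoff doubling per crash; it is illustrative, not kubelet code, and the exact mapping from restart count to delay step is approximate (the real kubelet tracks this with a flowcontrol backoff that is eventually reset after a period of stable running).

```go
package main

import (
	"fmt"
	"time"
)

// Assumed kubelet defaults: restart backoff starts around 10s, doubles on
// each crash, and is capped at MaxContainerBackOff = 5m (the value this
// spec verifies).
const (
	initialBackOff      = 10 * time.Second
	maxContainerBackOff = 300 * time.Second
)

// nominalDelay models roughly how the restart delay grows with restart
// count; the point is not the exact offset but that it saturates at the cap.
func nominalDelay(restartCount int) time.Duration {
	d := initialBackOff
	for i := 0; i < restartCount; i++ {
		d *= 2
		if d >= maxContainerBackOff {
			return maxContainerBackOff
		}
	}
	return d
}

func main() {
	for n := 0; n <= 10; n++ {
		fmt.Printf("restartCount=%d -> nominal delay %s\n", n, nominalDelay(n))
	}
}
```

Under these assumptions the nominal delay hits the 5m cap after only a handful of crashes, so every delay the spec samples at restart counts 7-10 should read as the cap, which is what the getRestartDelay lines above show.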