Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1652484276 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 13 23:24:37.807: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.809: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 13 23:24:37.835: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 13 23:24:37.913: INFO: The status of Pod cmk-init-discover-node1-m2p59 is Succeeded, skipping waiting
May 13 23:24:37.913: INFO: The status of Pod cmk-init-discover-node2-hm7r7 is Succeeded, skipping waiting
May 13 23:24:37.913: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 13 23:24:37.913: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 13 23:24:37.913: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 13 23:24:37.931: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 13 23:24:37.931: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 13 23:24:37.931: INFO: e2e test version: v1.21.9
May 13 23:24:37.932: INFO: kube-apiserver version: v1.21.1
May 13 23:24:37.933: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.938: INFO: Cluster IP family: ipv4
SSSSSS
------------------------------
May 13 23:24:37.935: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.957: INFO: Cluster IP family: ipv4
May 13 23:24:37.937: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.958: INFO: Cluster IP family: ipv4
S
------------------------------
May 13 23:24:37.939: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.960: INFO: Cluster IP family: ipv4
SSSSSSSSSS
------------------------------
May 13 23:24:37.949: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.971: INFO: Cluster IP family: ipv4
SSSSSSSS
------------------------------
May 13 23:24:37.955: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.977: INFO: Cluster IP family: ipv4
May 13 23:24:37.956: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.978: INFO: Cluster IP family: ipv4
May 13 23:24:37.957: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.978: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
May 13 23:24:37.960: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.983: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
May 13 23:24:37.961: INFO: >>> kubeConfig: /root/.kube/config
May 13 23:24:37.987: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:37.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
W0513 23:24:38.001605 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.001: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.005: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
May 13 23:24:38.019: INFO: Waiting up to 5m0s for pod "security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52" in namespace "security-context-1390" to be "Succeeded or Failed"
May 13 23:24:38.021: INFO: Pod "security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204585ms
May 13 23:24:40.027: INFO: Pod "security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007998524s
May 13 23:24:42.032: INFO: Pod "security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012658359s
STEP: Saw pod success
May 13 23:24:42.032: INFO: Pod "security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52" satisfied condition "Succeeded or Failed"
May 13 23:24:42.035: INFO: Trying to get logs from node node1 pod security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52 container test-container:
STEP: delete the pod
May 13 23:24:42.046: INFO: Waiting for pod security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52 to disappear
May 13 23:24:42.048: INFO: Pod security-context-357b5b41-a29a-43fb-a22e-1cc247b20e52 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:42.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-1390" for this suite.
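
For reference, the SupplementalGroups spec above creates a pod whose pod-level securityContext carries extra group IDs. A minimal hand-written equivalent (pod name, image, and GID value are illustrative, not taken from the test source) looks like:

apiVersion: v1
kind: Pod
metadata:
  name: supplemental-groups-demo   # illustrative name
spec:
  securityContext:
    supplementalGroups: [1234]     # extra GIDs attached to every container process
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "id -G"] # prints the effective group list, including 1234
  restartPolicy: Never

The framework then polls the pod, as logged above, until it reaches "Succeeded or Failed" and inspects the container log.
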
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":1,"skipped":4,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:38.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test W0513 23:24:38.092884 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 23:24:38.093: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 23:24:38.094: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 May 13 23:24:38.107: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef" in namespace "security-context-test-6887" to be "Succeeded or Failed" May 13 23:24:38.111: INFO: Pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.161268ms May 13 23:24:40.115: INFO: Pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007543307s May 13 23:24:42.118: INFO: Pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010160271s May 13 23:24:44.121: INFO: Pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013836452s May 13 23:24:46.125: INFO: Pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef": Phase="Failed", Reason="", readiness=false. Elapsed: 8.017282249s May 13 23:24:46.125: INFO: Pod "busybox-readonly-true-8902b640-bd60-4a39-b800-425844807fef" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:24:46.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6887" for this suite. 
• [SLOW TEST:8.063 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:38.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename runtimeclass
W0513 23:24:38.041066 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.041: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.043: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with scheduling without taints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a label on the found node.
STEP: verifying the node has the label foo-c807d026-5016-4bef-b5c4-b6706056cd24 bar
STEP: verifying the node has the label fizz-51b851bc-7734-4591-80e6-ff99fe570091 buzz
STEP: Trying to create runtimeclass and pod
STEP: removing the label fizz-51b851bc-7734-4591-80e6-ff99fe570091 off the node node1
STEP: verifying the node doesn't have the label fizz-51b851bc-7734-4591-80e6-ff99fe570091
STEP: removing the label foo-c807d026-5016-4bef-b5c4-b6706056cd24 off the node node1
STEP: verifying the node doesn't have the label foo-c807d026-5016-4bef-b5c4-b6706056cd24
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:46.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-8374" for this suite.
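
The RuntimeClass spec above labels a node, then creates a RuntimeClass whose scheduling block steers pods to that node. A minimal sketch under assumed names (the handler must match one configured in the node's CRI runtime; "foo: bar" stands in for the generated label):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtimeclass
handler: runc                # assumed handler name
scheduling:
  nodeSelector:
    foo: bar                 # pods using this class schedule only to matching nodes
---
apiVersion: v1
kind: Pod
metadata:
  name: runtimeclass-demo-pod
spec:
  runtimeClassName: demo-runtimeclass
  containers:
  - name: c
    image: busybox:1.36
    command: ["true"]
  restartPolicy: Never
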
• [SLOW TEST:8.128 seconds]
[sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run a Pod requesting a RuntimeClass with scheduling without taints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:125
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints ","total":-1,"completed":1,"skipped":20,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:38.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0513 23:24:38.023353 33 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.023: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.025: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
May 13 23:24:38.038: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-4414" to be "Succeeded or Failed"
May 13 23:24:38.040: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118116ms
May 13 23:24:40.043: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005893953s
May 13 23:24:42.048: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010163116s
May 13 23:24:44.053: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015862792s
May 13 23:24:46.058: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.019968205s
May 13 23:24:48.061: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.023419709s
May 13 23:24:48.061: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:48.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4414" for this suite.
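
The "image specified user ID" spec above sets runAsNonRoot without a runAsUser, so the kubelet takes the UID from the image's USER directive and only verifies it is non-zero. A sketch (the nonroot test image and tag are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: implicit-nonroot-uid-demo
spec:
  containers:
  - name: c
    image: k8s.gcr.io/e2e-test-images/nonroot:1.1   # assumed: an image whose USER is non-root
    securityContext:
      runAsNonRoot: true   # no runAsUser here; the UID comes from image metadata
  restartPolicy: Never
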
• [SLOW TEST:10.084 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:42.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
May 13 23:24:42.216: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7" in namespace "security-context-test-7215" to be "Succeeded or Failed"
May 13 23:24:42.218: INFO: Pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213577ms
May 13 23:24:44.224: INFO: Pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008061731s
May 13 23:24:46.227: INFO: Pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010926342s
May 13 23:24:48.230: INFO: Pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014075943s
May 13 23:24:48.230: INFO: Pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7" satisfied condition "Succeeded or Failed"
May 13 23:24:48.235: INFO: Got logs for pod "busybox-privileged-true-eb57e55f-2b77-4273-adc0-309abcdf0cb7": ""
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:48.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7215" for this suite.
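
The privileged spec above runs a container with securityContext.privileged: true and checks that a normally forbidden operation succeeds. A minimal sketch (the command is illustrative; the real test performs a similar host-level operation):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-true-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sh", "-c", "ip link add dummy0 type dummy"]  # needs CAP_NET_ADMIN; works only when privileged
    securityContext:
      privileged: true
  restartPolicy: Never
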
• [SLOW TEST:6.058 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with privileged
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282
------------------------------
S
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":2,"skipped":64,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:48.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
May 13 23:24:48.274: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:48.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-6056" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.033 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:38.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0513 23:24:38.389373 35 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.389: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.391: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:48.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7131" for this suite.
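
The "should not run with an explicit root user ID" spec above (note it logs no phase polling) relies on the kubelet refusing a contradictory security context at container-create time. A sketch of the contradiction:

apiVersion: v1
kind: Pod
metadata:
  name: explicit-root-uid-demo
spec:
  containers:
  - name: c
    image: busybox:1.36
    command: ["true"]
    securityContext:
      runAsNonRoot: true
      runAsUser: 0    # contradicts runAsNonRoot; the kubelet refuses to start the container
  restartPolicy: Never

The container is expected to remain in a waiting state (a CreateContainerConfigError-style rejection) rather than ever running, which is what the spec asserts.
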
• [SLOW TEST:10.047 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":-1,"completed":1,"skipped":123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:48.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename apparmor
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32
May 13 23:24:48.527: INFO: Only supported for node OS distro [gci ubuntu] (not debian)
[AfterEach] load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36
[AfterEach] [sig-node] AppArmor
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:48.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "apparmor-9778" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-node] AppArmor
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  load AppArmor profiles
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
    should enforce an AppArmor profile [BeforeEach]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43

    Only supported for node OS distro [gci ubuntu] (not debian)

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:38.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
W0513 23:24:38.524065 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.524: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.525: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
May 13 23:24:38.539: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25" in namespace "security-context-test-3942" to be "Succeeded or Failed"
May 13 23:24:38.541: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116076ms
May 13 23:24:40.546: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007091726s
May 13 23:24:42.550: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01136429s
May 13 23:24:44.554: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01516778s
May 13 23:24:46.558: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018934801s
May 13 23:24:48.560: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021699554s
May 13 23:24:50.565: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.026804155s
May 13 23:24:50.566: INFO: Pod "alpine-nnp-nil-023878fb-6261-4340-b0a2-943ee0e55a25" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:50.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3942" for this suite.

• [SLOW TEST:12.079 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":1,"skipped":205,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:48.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull image from invalid registry [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:51.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3118" for this suite.
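
The invalid-registry spec above only needs a container whose image reference points at a registry that cannot serve it, then watches the container status. A sketch (the image reference is deliberately bogus):

apiVersion: v1
kind: Pod
metadata:
  name: invalid-registry-demo
spec:
  containers:
  - name: c
    image: invalid.registry.example/does-not-exist:1.0   # hypothetical unreachable registry
  restartPolicy: Never

The container is expected to sit in a waiting state with a reason such as ErrImagePull or ImagePullBackOff rather than ever starting.
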
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":2,"skipped":189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:48.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 May 13 23:24:48.572: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-8917" to be "Succeeded or Failed" May 13 23:24:48.574: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179112ms May 13 23:24:50.577: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005600232s May 13 23:24:52.581: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009625081s May 13 23:24:54.585: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01320062s May 13 23:24:56.589: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.016943361s May 13 23:24:56.589: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:24:56.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8917" for this suite. 
• [SLOW TEST:8.062 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an explicit non-root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":2,"skipped":207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:48.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
May 13 23:24:48.816: INFO: Waiting up to 5m0s for pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc" in namespace "pods-6746" to be "Succeeded or Failed"
May 13 23:24:48.820: INFO: Pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.551415ms
May 13 23:24:50.823: INFO: Pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006422201s
May 13 23:24:52.826: INFO: Pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009561548s
May 13 23:24:54.829: INFO: Pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013210254s
May 13 23:24:56.832: INFO: Pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.015430068s
STEP: Saw pod success
May 13 23:24:56.832: INFO: Pod "pod-always-succeed19db82f1-a52e-48f1-ab7b-7980ca53dfcc" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:58.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6746" for this suite.
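
The "extra sandbox" spec above creates a pod whose containers all exit 0 and then inspects the recorded events to confirm the kubelet did not create a second pod sandbox after completion. One plausible shape for such a pod (restart policy and command are assumptions, not read from the test source):

apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed-demo
spec:
  restartPolicy: OnFailure   # exit code 0 counts as done, so nothing is restarted
  containers:
  - name: c
    image: busybox:1.36
    command: ["true"]        # exits 0 immediately
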
• [SLOW TEST:10.070 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Pod Container lifecycle
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444
    should not create extra sandbox if all containers are done
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":3,"skipped":315,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:56.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109
STEP: Creating a pod to test downward api env vars
May 13 23:24:56.827: INFO: Waiting up to 5m0s for pod "downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f" in namespace "downward-api-1805" to be "Succeeded or Failed"
May 13 23:24:56.829: INFO: Pod "downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085174ms
May 13 23:24:58.833: INFO: Pod "downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005717487s
STEP: Saw pod success
May 13 23:24:58.833: INFO: Pod "downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f" satisfied condition "Succeeded or Failed"
May 13 23:24:58.835: INFO: Trying to get logs from node node1 pod downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f container dapi-container:
STEP: delete the pod
May 13 23:24:58.846: INFO: Waiting for pod downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f to disappear
May 13 23:24:58.847: INFO: Pod downward-api-8b75c426-9cbf-4d3b-8905-88b478d6b96f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:58.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1805" for this suite.
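
The Downward API spec above injects the node and pod addresses as environment variables via fieldRef; with hostNetwork: true the two values are expected to match. A minimal sketch (pod name and image are illustrative; the fieldPath values are the standard ones):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  hostNetwork: true
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP POD_IP=$POD_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  restartPolicy: Never
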
•
S
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":3,"skipped":303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:51.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
May 13 23:24:51.887: INFO: Waiting up to 5m0s for pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92" in namespace "security-context-7123" to be "Succeeded or Failed"
May 13 23:24:51.889: INFO: Pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191314ms
May 13 23:24:53.895: INFO: Pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007578944s
May 13 23:24:55.898: INFO: Pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011351876s
May 13 23:24:57.902: INFO: Pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015075673s
May 13 23:24:59.907: INFO: Pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.020223136s
STEP: Saw pod success
May 13 23:24:59.907: INFO: Pod "security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92" satisfied condition "Succeeded or Failed"
May 13 23:24:59.910: INFO: Trying to get logs from node node2 pod security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92 container test-container:
STEP: delete the pod
May 13 23:24:59.922: INFO: Waiting for pod security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92 to disappear
May 13 23:24:59.924: INFO: Pod security-context-c2e760c6-d4ae-4485-90b2-b7170a1e7c92 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:24:59.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-7123" for this suite.
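
Unlike the container-level runAsUser specs elsewhere in this log, the RunAsUser spec above sets the UID once at pod level, where every container inherits it. Sketch (UID illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-runasuser-demo
spec:
  securityContext:
    runAsUser: 1001      # inherited by all containers unless overridden per container
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "id -u"]   # prints 1001
  restartPolicy: Never
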
• [SLOW TEST:8.079 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:25:00.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52
May 13 23:25:00.170: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] NodeProblemDetector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:25:00.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-7738" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [0.034 seconds]
[sig-node] NodeProblemDetector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should run without error [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53
------------------------------
SSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:38.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0513 23:24:38.368066 39 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.368: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.369: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53
[It] should be submitted and removed
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 13 23:24:55.434: INFO: start=2022-05-13 23:24:50.402147882 +0000 UTC m=+14.205299965, now=2022-05-13 23:24:55.434854354 +0000 UTC m=+19.238006485, kubelet pod: {"metadata":{"name":"pod-submit-remove-3300c654-99b6-4e4f-919e-56150b5edd82","namespace":"pods-5547","uid":"fd589108-c7a1-4347-bab8-6eb20ea68c46","resourceVersion":"76776","creationTimestamp":"2022-05-13T23:24:38Z","deletionTimestamp":"2022-05-13T23:25:20Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"372066154"},"annotations":{"k8s.v1.cni.cncf.io/network-status":"[{\n    \"name\": \"default-cni-network\",\n    \"interface\": \"eth0\",\n    \"ips\": [\n        \"10.244.4.97\"\n    ],\n    \"mac\": \"b6:f3:dd:9c:7e:73\",\n    \"default\": true,\n    \"dns\": {}\n}]","k8s.v1.cni.cncf.io/networks-status":"[{\n    \"name\": \"default-cni-network\",\n    \"interface\": \"eth0\",\n    \"ips\": [\n        \"10.244.4.97\"\n    ],\n    \"mac\": \"b6:f3:dd:9c:7e:73\",\n    \"default\": true,\n    \"dns\": {}\n}]","kubernetes.io/config.seen":"2022-05-13T23:24:38.385985728Z","kubernetes.io/config.source":"api","kubernetes.io/psp":"collectd"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2022-05-13T23:24:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-dqbxs","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","args":["pause"],"resources":{},"volumeMounts":[{"name":"kube-api-access-dqbxs","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"Always","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-05-13T23:24:38Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-05-13T23:24:51Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2022-05-13T23:24:51Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2022-05-13T23:24:38Z"}],"hostIP":"10.10.190.208","podIP":"10.244.4.97","podIPs":[{"ip":"10.244.4.97"}],"startTime":"2022-05-13T23:24:38Z","containerStatuses":[{"name":"agnhost-container","state":{"terminated":{"exitCode":2,"reason":"Error","startedAt":"2022-05-13T23:24:49Z","finishedAt":"2022-05-13T23:24:50Z","containerID":"docker://d62ee901dc72843f77a1076d3e39c6f90a8f567c51a628dbe42bc4db8129fecf"}},"lastState":{},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.32","imageID":"docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1","containerID":"docker://d62ee901dc72843f77a1076d3e39c6f90a8f567c51a628dbe42bc4db8129fecf","started":false}],"qosClass":"BestEffort"}}
May 13 23:25:00.451: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:25:00.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5547" for this suite.

• [SLOW TEST:22.121 seconds]
[sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Delete Grace Period
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51
    should be submitted and removed
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":-1,"completed":1,"skipped":123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:59.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:25:04.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3579" for this suite.
• [SLOW TEST:5.079 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":-1,"completed":4,"skipped":625,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:25:01.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
May 13 23:25:01.049: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-f777ecfb-cdd5-4f81-b63c-e591802e8b10" in namespace "security-context-test-1594" to be "Succeeded or Failed"
May 13 23:25:01.050: INFO: Pod "alpine-nnp-true-f777ecfb-cdd5-4f81-b63c-e591802e8b10": Phase="Pending", Reason="", readiness=false. Elapsed: 1.819822ms
May 13 23:25:03.053: INFO: Pod "alpine-nnp-true-f777ecfb-cdd5-4f81-b63c-e591802e8b10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004138173s
May 13 23:25:05.058: INFO: Pod "alpine-nnp-true-f777ecfb-cdd5-4f81-b63c-e591802e8b10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009389965s
May 13 23:25:05.058: INFO: Pod "alpine-nnp-true-f777ecfb-cdd5-4f81-b63c-e591802e8b10" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:25:05.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1594" for this suite.
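
The alpine-nnp-true spec above sets allowPrivilegeEscalation explicitly to true for a non-root user and verifies the process does not carry the no_new_privs flag. Sketch (image and command are illustrative; the real test uses a purpose-built test binary):

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-true-demo
spec:
  containers:
  - name: c
    image: alpine:3.18
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]  # expect NoNewPrivs: 0
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: true   # setuid binaries may regain privileges
  restartPolicy: Never
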
• ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:05.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-4976/configmap-test-af839539-81b4-46a5-ab2f-b799c41c15fd STEP: Updating configMap configmap-4976/configmap-test-af839539-81b4-46a5-ab2f-b799c41c15fd STEP: Verifying update of ConfigMap configmap-4976/configmap-test-af839539-81b4-46a5-ab2f-b799c41c15fd [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:05.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4976" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":3,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:59.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 May 13 23:24:59.040: INFO: Waiting up to 5m0s for pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb" in namespace "security-context-test-2909" to be "Succeeded or Failed" May 13 23:24:59.042: INFO: Pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051045ms May 13 23:25:01.047: INFO: Pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006600677s May 13 23:25:03.050: INFO: Pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010149914s May 13 23:25:05.054: INFO: Pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.013804292s May 13 23:25:07.058: INFO: Pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018269802s May 13 23:25:07.058: INFO: Pod "busybox-user-0-a3ee474c-4b3c-472c-8135-55a0cbffacbb" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:07.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2909" for this suite. • [SLOW TEST:8.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:05.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 13 23:25:09.882: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:09.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7103" for this suite. 
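For reference, the termination-message spec above ("Expected: &{DONE} to match Container's Termination Message: DONE") relies on pod.Spec.Containers[].TerminationMessagePath: whatever the container writes to that file before exiting, the kubelet copies into status.containerStatuses[].state.terminated.message. A minimal sketch, assuming an illustrative busybox image and a custom path rather than the default /dev/termination-log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-container"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "termination-message-container",
				Image:   "busybox:1.35", // illustrative image
				Command: []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// The kubelet reads this file after the container terminates and
				// surfaces it as the terminated state's message ("DONE" here).
				TerminationMessagePath: "/dev/termination-custom-log",
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}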
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":4,"skipped":760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:10.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 May 13 23:25:10.064: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:10.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-8649" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:07.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 13 23:25:07.205: INFO: Waiting up to 5m0s for pod "security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64" in namespace "security-context-9924" to be "Succeeded or Failed" May 13 23:25:07.207: INFO: Pod "security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117547ms May 13 23:25:09.212: INFO: Pod "security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006856438s May 13 23:25:11.216: INFO: Pod "security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010739501s May 13 23:25:13.218: INFO: Pod "security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013351243s STEP: Saw pod success May 13 23:25:13.218: INFO: Pod "security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64" satisfied condition "Succeeded or Failed" May 13 23:25:13.221: INFO: Trying to get logs from node node2 pod security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64 container test-container: STEP: delete the pod May 13 23:25:13.236: INFO: Waiting for pod security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64 to disappear May 13 23:25:13.237: INFO: Pod security-context-524b02d0-db3e-40ea-9fce-6fc4b36fdf64 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:13.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-9924" for this suite. • [SLOW TEST:6.074 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":-1,"completed":5,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:46.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 May 13 23:24:46.262: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:48.265: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:50.270: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:52.266: INFO: The status of Pod master is Running (Ready = true) May 13 23:24:52.283: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:54.289: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:56.288: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:58.287: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:00.288: INFO: The status of Pod slave is Running (Ready = true) May 13 23:25:00.301: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:02.305: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:04.308: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:06.305: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:08.306: INFO: The status of Pod private is Running (Ready = true) May 13 23:25:08.321: INFO: The status of Pod 
default is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:10.325: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:12.325: INFO: The status of Pod default is Running (Ready = true) May 13 23:25:12.330: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:12.330: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:12.432: INFO: Exec stderr: "" May 13 23:25:12.435: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:12.435: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:12.585: INFO: Exec stderr: "" May 13 23:25:12.588: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:12.588: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:12.725: INFO: Exec stderr: "" May 13 23:25:12.727: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:12.727: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:12.839: INFO: Exec stderr: "" May 13 23:25:12.841: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:12.842: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:12.923: INFO: Exec stderr: "" May 13 23:25:12.926: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:12.926: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.028: INFO: Exec stderr: "" May 13 23:25:13.031: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.031: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.113: INFO: Exec stderr: "" May 13 23:25:13.115: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.115: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.197: INFO: Exec stderr: "" May 13 23:25:13.200: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.200: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.280: INFO: Exec stderr: "" May 13 23:25:13.283: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.283: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.367: INFO: Exec stderr: "" May 13 23:25:13.370: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.370: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.456: INFO: Exec stderr: "" May 13 23:25:13.459: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.459: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.546: INFO: Exec stderr: "" May 13 23:25:13.548: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.548: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.631: INFO: Exec stderr: "" May 13 23:25:13.634: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.634: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.748: INFO: Exec stderr: "" May 13 23:25:13.750: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.750: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.829: INFO: Exec stderr: "" May 13 23:25:13.831: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.831: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:13.938: INFO: Exec stderr: "" May 13 23:25:13.941: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:13.941: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:14.020: INFO: Exec stderr: "" May 13 23:25:14.022: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:14.022: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:14.140: INFO: Exec stderr: "" May 13 23:25:14.145: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:14.145: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:14.233: INFO: Exec stderr: "" May 13 23:25:14.235: INFO: ExecWithOptions 
{Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:14.235: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:14.433: INFO: Exec stderr: "" May 13 23:25:16.452: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-7725"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-7725"/host; echo host > "/var/lib/kubelet/mount-propagation-7725"/host/file] Namespace:mount-propagation-7725 PodName:hostexec-node2-4z5zt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 13 23:25:16.452: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:16.546: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:16.547: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:16.628: INFO: pod master mount master: stdout: "master", stderr: "" error: May 13 23:25:16.630: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:16.630: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:16.713: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:16.715: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:16.715: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:16.808: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:16.811: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:16.811: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:16.912: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:16.915: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:16.915: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:16.999: INFO: pod master mount host: stdout: "host", stderr: "" error: May 13 23:25:17.001: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.001: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.103: INFO: pod slave mount master: stdout: "master", stderr: "" 
error: May 13 23:25:17.106: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.106: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.192: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: May 13 23:25:17.194: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.194: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.278: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:17.281: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.281: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.373: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:17.376: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.376: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.481: INFO: pod slave mount host: stdout: "host", stderr: "" error: May 13 23:25:17.483: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.483: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.569: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:17.572: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.572: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.654: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:17.657: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.657: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.739: INFO: pod private mount private: stdout: "private", stderr: "" error: May 13 23:25:17.742: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.742: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.846: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such 
file or directory" error: command terminated with exit code 1 May 13 23:25:17.849: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.849: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:17.931: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:17.934: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:17.934: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.044: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:18.047: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.047: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.140: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:18.142: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.142: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.221: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:18.224: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.224: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.331: INFO: pod default mount default: stdout: "default", stderr: "" error: May 13 23:25:18.333: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.333: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.418: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 May 13 23:25:18.418: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-7725"/master/file` = master] Namespace:mount-propagation-7725 PodName:hostexec-node2-4z5zt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 13 23:25:18.418: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.513: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-7725"/slave/file] Namespace:mount-propagation-7725 PodName:hostexec-node2-4z5zt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 13 23:25:18.513: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.598: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-7725"/host] Namespace:mount-propagation-7725 PodName:hostexec-node2-4z5zt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 13 23:25:18.598: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.697: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-7725 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.697: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.789: INFO: Exec stderr: "" May 13 23:25:18.791: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-7725 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.791: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.881: INFO: Exec stderr: "" May 13 23:25:18.885: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-7725 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.885: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:18.972: INFO: Exec stderr: "" May 13 23:25:18.976: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-7725 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:18.976: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:19.079: INFO: Exec stderr: "" May 13 23:25:19.079: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-7725"] Namespace:mount-propagation-7725 PodName:hostexec-node2-4z5zt ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} May 13 23:25:19.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-node2-4z5zt in namespace mount-propagation-7725 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:19.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-7725" for this suite. 
• [SLOW TEST:32.973 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":2,"skipped":56,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:19.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 13 23:25:19.311: INFO: Waiting up to 5m0s for pod "security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568" in namespace "security-context-2325" to be "Succeeded or Failed" May 13 23:25:19.313: INFO: Pod "security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092808ms May 13 23:25:21.317: INFO: Pod "security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005841245s May 13 23:25:23.321: INFO: Pod "security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009814359s STEP: Saw pod success May 13 23:25:23.321: INFO: Pod "security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568" satisfied condition "Succeeded or Failed" May 13 23:25:23.323: INFO: Trying to get logs from node node1 pod security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568 container test-container: STEP: delete the pod May 13 23:25:23.334: INFO: Waiting for pod security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568 to disappear May 13 23:25:23.337: INFO: Pod security-context-dfa4e3d0-9829-4eaa-9f83-e2cc26801568 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:23.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2325" for this suite. 
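For reference, the spec above exercises the container-level securityContext.runAsUser field: the container's entrypoint runs with the requested UID regardless of the image's USER directive. A minimal sketch, with an illustrative UID and image (the test's own values differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-run-as-user"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.35",     // illustrative image
				Command: []string{"id", "-u"}, // prints 1001
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1001), // illustrative UID
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}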
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":3,"skipped":90,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:23.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:23.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2348" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":106,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:04.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 STEP: Creating pod liveness-de0d9cca-f20c-4e60-bb7d-3a019a955341 in namespace container-probe-100 May 13 23:25:08.611: INFO: Started pod liveness-de0d9cca-f20c-4e60-bb7d-3a019a955341 in namespace container-probe-100 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:25:08.614: INFO: Initial restart count of pod liveness-de0d9cca-f20c-4e60-bb7d-3a019a955341 is 0 May 13 23:25:28.654: INFO: Restart count of pod container-probe-100/liveness-de0d9cca-f20c-4e60-bb7d-3a019a955341 is now 1 (20.040588493s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:28.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-100" for this suite. 
• [SLOW TEST:24.109 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:274 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":5,"skipped":647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:23.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:29.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9445" for this suite. • [SLOW TEST:6.078 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":-1,"completed":5,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:29.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:29.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-5465" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":6,"skipped":298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:28.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container May 13 23:25:28.785: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:30.788: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) May 13 23:25:32.789: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container May 13 23:25:32.792: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-3292 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:32.792: INFO: >>> kubeConfig: /root/.kube/config May 13 23:25:32.879: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-3292 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:32.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container May 13 23:25:32.985: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-3292 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 13 23:25:32.985: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:33.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-3292" for this suite. 
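For reference, the privileged-pod spec above runs two containers in one pod and execs `ip link add dummy1 type dummy` in each: it succeeds only in the container with securityContext.privileged=true (the non-privileged exec is expected to fail with "Operation not permitted", which is why the test passes after the third exec). A minimal sketch, with an illustrative image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	container := func(name string, privileged bool) corev1.Container {
		return corev1.Container{
			Name:            name,
			Image:           "busybox:1.35", // illustrative image
			Command:         []string{"sleep", "3600"},
			SecurityContext: &corev1.SecurityContext{Privileged: boolPtr(privileged)},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "privileged-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				container("privileged-container", true),
				container("not-privileged-container", false),
			},
		},
	}
	// kubectl exec privileged-pod -c privileged-container -- ip link add dummy1 type dummy   # succeeds
	// kubectl exec privileged-pod -c not-privileged-container -- ip link add dummy1 type dummy # fails
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}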
• ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":6,"skipped":684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:33.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should reject a Pod requesting a RuntimeClass with conflicting node selector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/runtimeclass.go:41 [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:33.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-3564" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":7,"skipped":725,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:30.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:34.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8337" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":7,"skipped":350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:38.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0513 23:24:38.274633 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 23:24:38.275: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 23:24:38.276: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 STEP: Creating pod busybox-c42e1290-4304-4533-af16-54b7542a3f26 in namespace container-probe-1278 May 13 23:24:48.294: INFO: Started pod busybox-c42e1290-4304-4533-af16-54b7542a3f26 in namespace container-probe-1278 May 13 23:24:48.294: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (1.989µs elapsed) May 13 23:24:50.294: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (2.000504538s elapsed) May 13 23:24:52.296: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (4.002178569s elapsed) May 13 23:24:54.300: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (6.006103949s elapsed) May 13 23:24:56.301: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (8.007402204s elapsed) May 13 23:24:58.302: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (10.008079654s elapsed) May 13 23:25:00.303: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (12.009620576s elapsed) May 13 23:25:02.307: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (14.013320449s elapsed) May 13 23:25:04.310: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (16.015894575s elapsed) May 13 23:25:06.310: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (18.016140196s elapsed) May 13 23:25:08.311: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (20.017241324s elapsed) May 13 23:25:10.312: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (22.018485299s elapsed) May 13 23:25:12.315: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (24.021455671s elapsed) May 13 23:25:14.317: INFO: pod 
container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (26.023055606s elapsed) May 13 23:25:16.318: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (28.024156847s elapsed) May 13 23:25:18.319: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (30.025193286s elapsed) May 13 23:25:20.322: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (32.028623428s elapsed) May 13 23:25:22.325: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (34.031020115s elapsed) May 13 23:25:24.327: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (36.032884103s elapsed) May 13 23:25:26.327: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (38.033665216s elapsed) May 13 23:25:28.329: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (40.034849727s elapsed) May 13 23:25:30.332: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (42.038022671s elapsed) May 13 23:25:32.335: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (44.041238509s elapsed) May 13 23:25:34.336: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (46.042101478s elapsed) May 13 23:25:36.336: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (48.042238777s elapsed) May 13 23:25:38.337: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (50.043563424s elapsed) May 13 23:25:40.341: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (52.046894979s elapsed) May 13 23:25:42.343: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (54.049323692s elapsed) May 13 23:25:44.345: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (56.050850375s elapsed) May 13 23:25:46.346: INFO: pod container-probe-1278/busybox-c42e1290-4304-4533-af16-54b7542a3f26 is not ready (58.051761145s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:48.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1278" for this suite. 
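For reference, the spec above keeps reporting "is not ready" because its exec readiness probe runs a command that outlives timeoutSeconds; kubelets at or above v1.20 enforce exec probe timeouts (hence the [MinimumKubeletVersion:1.20] tag), so every probe attempt fails and the pod never becomes Ready. A minimal sketch, with illustrative image, command, and timings:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readiness := corev1.Probe{
		TimeoutSeconds: 1,  // each attempt is cut off after 1s...
		PeriodSeconds:  10,
	}
	// ...but the probe command sleeps far longer, so every attempt times out.
	readiness.Exec = &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "sleep 600"}}

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readiness-timeout"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:           "busybox",
				Image:          "busybox:1.35", // illustrative image
				Command:        []string{"sleep", "3600"},
				ReadinessProbe: &readiness,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}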
• [SLOW TEST:70.113 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:237 ------------------------------ {"msg":"PASSED [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":1,"skipped":78,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:48.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 13 23:25:48.591: INFO: Waiting up to 5m0s for pod "security-context-41c14d10-604b-49e6-9893-c404ae532a66" in namespace "security-context-478" to be "Succeeded or Failed" May 13 23:25:48.593: INFO: Pod "security-context-41c14d10-604b-49e6-9893-c404ae532a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336673ms May 13 23:25:50.595: INFO: Pod "security-context-41c14d10-604b-49e6-9893-c404ae532a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004856383s May 13 23:25:52.599: INFO: Pod "security-context-41c14d10-604b-49e6-9893-c404ae532a66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008685845s STEP: Saw pod success May 13 23:25:52.599: INFO: Pod "security-context-41c14d10-604b-49e6-9893-c404ae532a66" satisfied condition "Succeeded or Failed" May 13 23:25:52.601: INFO: Trying to get logs from node node2 pod security-context-41c14d10-604b-49e6-9893-c404ae532a66 container test-container: STEP: delete the pod May 13 23:25:52.633: INFO: Waiting for pod security-context-41c14d10-604b-49e6-9893-c404ae532a66 to disappear May 13 23:25:52.636: INFO: Pod security-context-41c14d10-604b-49e6-9893-c404ae532a66 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:52.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-478" for this suite. 
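For reference, the spec above requests an unconfined seccomp profile at the pod level, via the legacy seccomp.security.alpha.kubernetes.io/pod annotation named in the STEP line; the container then observes "Seccomp: 0" in /proc/self/status. A minimal sketch showing both the annotation and its field-based equivalent (securityContext.seccompProfile, available since v1.19); the image is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "security-context-seccomp-unconfined",
			// Legacy annotation form, as used by this test (removed in later releases).
			Annotations: map[string]string{
				"seccomp.security.alpha.kubernetes.io/pod": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Field-based equivalent of the annotation above.
				SeccompProfile: &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeUnconfined},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.35", // illustrative image
				Command: []string{"grep", "Seccomp:", "/proc/self/status"}, // expect "Seccomp: 0"
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}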
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":2,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:52.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:25:57.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3266" for this suite. • [SLOW TEST:5.071 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":3,"skipped":196,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:10.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 STEP: Creating pod busybox-d7df3a18-2fb0-414f-a60f-4ebf8e9e049d in namespace container-probe-5051 May 13 23:25:14.332: INFO: Started pod busybox-d7df3a18-2fb0-414f-a60f-4ebf8e9e049d in namespace container-probe-5051 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:25:14.335: INFO: Initial restart count of pod busybox-d7df3a18-2fb0-414f-a60f-4ebf8e9e049d is 0 May 13 23:26:04.455: INFO: Restart count of pod container-probe-5051/busybox-d7df3a18-2fb0-414f-a60f-4ebf8e9e049d is now 1 (50.120755523s elapsed) STEP: 
deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:04.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5051" for this suite. • [SLOW TEST:54.179 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:217 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]","total":-1,"completed":5,"skipped":945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:04.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 13 23:26:04.595: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod May 13 23:26:04.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7491 create -f -' May 13 23:26:05.099: INFO: stderr: "" May 13 23:26:05.099: INFO: stdout: "secret/test-secret created\n" May 13 23:26:05.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7491 create -f -' May 13 23:26:05.431: INFO: stderr: "" May 13 23:26:05.431: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly May 13 23:26:11.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-7491 logs secret-test-pod test-container' May 13 23:26:11.618: INFO: stderr: "" May 13 23:26:11.618: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\r\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:11.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-7491" for this suite. 
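------------------------------
The Secret example above creates test-secret, mounts it into secret-test-pod, and reads the key back; the log shows the container printing the value of /etc/secret-volume/data-1. A sketch of the two objects as they might be built in Go (names, key, value, and mount path come from the log; the image and exact command are assumptions):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretAndPod mirrors the "pod that reads a secret" example: the Secret is
// mounted as a volume and the container echoes one key from the mount path.
func secretAndPod() (*corev1.Secret, *corev1.Pod) {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "test-secret"},
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "test-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumption
				Command: []string{"sh", "-c",
					`echo "content of file \"/etc/secret-volume/data-1\": $(cat /etc/secret-volume/data-1)"`},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	return secret, pod
}

func main() { _, _ = secretAndPod() }
------------------------------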
• [SLOW TEST:7.062 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":-1,"completed":6,"skipped":985,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:13.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 STEP: Creating pod startup-108bb638-11e7-46e5-8b30-4a775c6e2b2a in namespace container-probe-77 May 13 23:25:17.415: INFO: Started pod startup-108bb638-11e7-46e5-8b30-4a775c6e2b2a in namespace container-probe-77 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:25:17.418: INFO: Initial restart count of pod startup-108bb638-11e7-46e5-8b30-4a775c6e2b2a is 0 May 13 23:26:17.553: INFO: Restart count of pod container-probe-77/startup-108bb638-11e7-46e5-8b30-4a775c6e2b2a is now 1 (1m0.135497946s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:17.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-77" for this suite. 
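------------------------------
The container-probe spec above depends on kubelet ordering: liveness probing only begins once the startup probe has succeeded, so the first restart lands about a minute in, matching the elapsed time in the log. A sketch of that probe arrangement; commands, thresholds, and names are illustrative assumptions, and in client-go for k8s >= 1.23 the embedded Handler field is spelled ProbeHandler:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The liveness probe always fails, but the kubelet does not run it until the
// startup probe has succeeded, so the restart is observed only after the
// startup phase completes.
func startupGatedLivenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-gated"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox", // assumption
				Command: []string{"sleep", "600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in k8s >= 1.23
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/true"}},
					},
					InitialDelaySeconds: 45,
					FailureThreshold:    4,
				},
			}},
		},
	}
}

func main() { _ = startupGatedLivenessPod() }
------------------------------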
• [SLOW TEST:64.199 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:371 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":6,"skipped":494,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:17.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 13 23:26:17.647: INFO: Waiting up to 5m0s for pod "security-context-6a1656e3-cebe-40dd-92f8-3c397d210982" in namespace "security-context-8190" to be "Succeeded or Failed" May 13 23:26:17.649: INFO: Pod "security-context-6a1656e3-cebe-40dd-92f8-3c397d210982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136608ms May 13 23:26:19.651: INFO: Pod "security-context-6a1656e3-cebe-40dd-92f8-3c397d210982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004780189s May 13 23:26:21.655: INFO: Pod "security-context-6a1656e3-cebe-40dd-92f8-3c397d210982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008637306s STEP: Saw pod success May 13 23:26:21.655: INFO: Pod "security-context-6a1656e3-cebe-40dd-92f8-3c397d210982" satisfied condition "Succeeded or Failed" May 13 23:26:21.659: INFO: Trying to get logs from node node2 pod security-context-6a1656e3-cebe-40dd-92f8-3c397d210982 container test-container: STEP: delete the pod May 13 23:26:21.675: INFO: Waiting for pod security-context-6a1656e3-cebe-40dd-92f8-3c397d210982 to disappear May 13 23:26:21.677: INFO: Pod security-context-6a1656e3-cebe-40dd-92f8-3c397d210982 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:21.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8190" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":7,"skipped":513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:11.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination May 13 23:26:35.918: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:35.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6829" for this suite. 
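------------------------------
The PreStop spec above deletes a pod gracefully and checks that the pod is still reported as running while the hook executes. A sketch of the pod shape involved; the names, image, and sleep durations are assumptions, and in k8s >= 1.23 the handler type is spelled LifecycleHandler:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// On graceful deletion the kubelet runs the preStop hook first and keeps the
// pod running until the hook finishes or the grace period runs out, which is
// why the log can still see "pod is running" inside the termination window.
func preStopPod() *corev1.Pod {
	grace := int64(30) // assumption: an ordinary-sized grace period
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox", // assumption
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // LifecycleHandler in k8s >= 1.23
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "sleep 10"}},
					},
				},
			}},
		},
	}
}

func main() { _ = preStopPod() }
------------------------------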
• [SLOW TEST:24.084 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":7,"skipped":1077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:36.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 May 13 23:26:36.361: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:36.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-7767" for this suite. 
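------------------------------
The AppArmor spec that follows is skipped on this cluster's debian nodes. On a supported distro it would create a pod that opts a container out of AppArmor confinement via the per-container beta annotation. A sketch of what such a pod could look like; the container name, image, and command are assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The annotation key embeds the container name it applies to; the value
// "unconfined" disables any AppArmor profile for that container.
func unconfinedAppArmorPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-demo", // illustrative name
			Annotations: map[string]string{
				"container.apparmor.security.beta.kubernetes.io/test": "unconfined",
			},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "busybox", // assumption
				Command: []string{"cat", "/proc/self/attr/current"},
			}},
		},
	}
}

func main() { fmt.Println(unconfinedAppArmorPod().Name) }
------------------------------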
S [SKIPPING] in Spec Setup (BeforeEach) [0.032 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:284 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:36.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod May 13 23:26:36.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9111 create -f -' May 13 23:26:37.091: INFO: stderr: "" May 13 23:26:37.092: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly May 13 23:26:41.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9111 logs dapi-test-pod test-container' May 13 23:26:41.290: INFO: stderr: "" May 13 23:26:41.290: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9111\nMY_POD_IP=10.244.4.132\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" May 13 23:26:41.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9111 logs dapi-test-pod test-container' May 13 23:26:41.463: INFO: stderr: "" May 13 23:26:41.463: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.233.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-9111\nMY_POD_IP=10.244.4.132\nKUBERNETES_PORT_443_TCP_ADDR=10.233.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=10.10.190.208\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_PORT_443_TCP=tcp://10.233.0.1:443\nKUBERNETES_SERVICE_HOST=10.233.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:41.464: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9111" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":-1,"completed":8,"skipped":1415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:34.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 May 13 23:25:34.312: INFO: Found ClusterRoles; assuming RBAC is enabled. [It] liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 May 13 23:25:34.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9838 create -f -' May 13 23:25:34.853: INFO: stderr: "" May 13 23:25:34.853: INFO: stdout: "pod/liveness-exec created\n" May 13 23:25:34.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=examples-9838 create -f -' May 13 23:25:35.215: INFO: stderr: "" May 13 23:25:35.215: INFO: stdout: "pod/liveness-http created\n" STEP: Check restarts May 13 23:25:39.225: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:39.225: INFO: Pod: liveness-http, restart count:0 May 13 23:25:41.229: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:41.229: INFO: Pod: liveness-http, restart count:0 May 13 23:25:43.233: INFO: Pod: liveness-http, restart count:0 May 13 23:25:43.233: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:45.237: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:45.237: INFO: Pod: liveness-http, restart count:0 May 13 23:25:47.240: INFO: Pod: liveness-http, restart count:0 May 13 23:25:47.240: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:49.244: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:49.244: INFO: Pod: liveness-http, restart count:0 May 13 23:25:51.248: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:51.248: INFO: Pod: liveness-http, restart count:0 May 13 23:25:53.251: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:53.251: INFO: Pod: liveness-http, restart count:0 May 13 23:25:55.256: INFO: Pod: liveness-http, restart count:0 May 13 23:25:55.256: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:57.259: INFO: Pod: liveness-exec, restart count:0 May 13 23:25:57.259: INFO: Pod: liveness-http, restart count:0 May 13 23:25:59.264: INFO: Pod: liveness-http, restart count:0 May 13 23:25:59.264: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:01.267: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:01.267: INFO: Pod: liveness-http, restart count:0 May 13 23:26:03.271: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:03.271: INFO: Pod: liveness-http, restart count:0 May 13 23:26:05.275: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:05.275: INFO: Pod: liveness-http, restart count:0 May 13 23:26:07.279: INFO: Pod: liveness-http, restart count:0 May 13 23:26:07.279: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:09.284: INFO: 
Pod: liveness-exec, restart count:0 May 13 23:26:09.284: INFO: Pod: liveness-http, restart count:0 May 13 23:26:11.287: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:11.288: INFO: Pod: liveness-http, restart count:0 May 13 23:26:13.291: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:13.291: INFO: Pod: liveness-http, restart count:0 May 13 23:26:15.296: INFO: Pod: liveness-http, restart count:0 May 13 23:26:15.296: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:17.300: INFO: Pod: liveness-http, restart count:1 May 13 23:26:17.300: INFO: Saw liveness-http restart, succeeded... May 13 23:26:17.300: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:19.304: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:21.307: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:23.316: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:25.320: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:27.323: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:29.327: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:31.331: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:33.334: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:35.338: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:37.342: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:39.348: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:41.351: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:43.356: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:45.360: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:47.364: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:49.369: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:51.372: INFO: Pod: liveness-exec, restart count:0 May 13 23:26:53.375: INFO: Pod: liveness-exec, restart count:1 May 13 23:26:53.375: INFO: Saw liveness-exec restart, succeeded... [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:53.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-9838" for this suite. 
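------------------------------
The Liveness example above polls the restart counts of liveness-exec and liveness-http until each pod has been restarted once by its failing probe. A sketch of the liveness-exec shape; the image, timings, and file path are assumptions in the style of the classic example, and liveness-http is the same idea with an HTTPGet handler:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// The container deletes its own health file after a while, the exec probe
// starts failing, and the kubelet restarts the container. That restart-count
// transition is exactly what the log above is polling for.
func livenessExecPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox", // assumption
				Args: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in k8s >= 1.23
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}

// The liveness-http variant swaps the handler; path and port are assumptions.
var httpProbe = &corev1.Probe{
	Handler: corev1.Handler{
		HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
	},
	InitialDelaySeconds: 15,
}

func main() { _, _ = livenessExecPod(), httpProbe }
------------------------------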
• [SLOW TEST:79.100 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Liveness /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66 liveness pods should be automatically restarted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":-1,"completed":8,"skipped":475,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:25:57.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 STEP: Creating pod busybox-3f751a42-b5e8-4301-bd36-bac6c9c773cd in namespace container-probe-4188 May 13 23:26:01.907: INFO: Started pod busybox-3f751a42-b5e8-4301-bd36-bac6c9c773cd in namespace container-probe-4188 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:26:01.910: INFO: Initial restart count of pod busybox-3f751a42-b5e8-4301-bd36-bac6c9c773cd is 0 May 13 23:26:58.031: INFO: Restart count of pod container-probe-4188/busybox-3f751a42-b5e8-4301-bd36-bac6c9c773cd is now 1 (56.121554825s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:26:58.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4188" for this suite. 
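------------------------------
The probe-timeout spec above hinges on exec probe timeouts actually being enforced, which kubelets do from 1.20 on (hence the [MinimumKubeletVersion:1.20] tags on the related specs). A sketch of the probe shape involved; the command and numbers are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// The probe command sleeps past TimeoutSeconds, so on a kubelet that enforces
// exec probe timeouts the probe fails and the container is restarted.
var slowExecLiveness = &corev1.Probe{
	Handler: corev1.Handler{ // ProbeHandler in k8s >= 1.23
		Exec: &corev1.ExecAction{
			Command: []string{"/bin/sh", "-c", "sleep 10"}, // outlives the timeout
		},
	},
	TimeoutSeconds:      1,
	InitialDelaySeconds: 10,
	FailureThreshold:    1,
}

func main() { _ = slowExecLiveness }
------------------------------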
• [SLOW TEST:60.178 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a failing exec liveness probe that took longer than the timeout /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:254 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout","total":-1,"completed":4,"skipped":211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:58.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod May 13 23:26:58.421: INFO: Waiting up to 5m0s for pod "security-context-e11ca816-5667-49dd-af40-8f673d3853b1" in namespace "security-context-870" to be "Succeeded or Failed" May 13 23:26:58.423: INFO: Pod "security-context-e11ca816-5667-49dd-af40-8f673d3853b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1859ms May 13 23:27:00.426: INFO: Pod "security-context-e11ca816-5667-49dd-af40-8f673d3853b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005115101s May 13 23:27:02.432: INFO: Pod "security-context-e11ca816-5667-49dd-af40-8f673d3853b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010998899s May 13 23:27:04.440: INFO: Pod "security-context-e11ca816-5667-49dd-af40-8f673d3853b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019690874s STEP: Saw pod success May 13 23:27:04.440: INFO: Pod "security-context-e11ca816-5667-49dd-af40-8f673d3853b1" satisfied condition "Succeeded or Failed" May 13 23:27:04.443: INFO: Trying to get logs from node node2 pod security-context-e11ca816-5667-49dd-af40-8f673d3853b1 container test-container: STEP: delete the pod May 13 23:27:04.456: INFO: Waiting for pod security-context-e11ca816-5667-49dd-af40-8f673d3853b1 to disappear May 13 23:27:04.459: INFO: Pod security-context-e11ca816-5667-49dd-af40-8f673d3853b1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:04.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-870" for this suite. 
• [SLOW TEST:6.083 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp runtime/default [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":5,"skipped":383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:26:41.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e in namespace kubelet-8990
I0513 23:26:41.614887 39 runners.go:190] Created replication controller with name: cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e, namespace: kubelet-8990, replica count: 20
I0513 23:26:51.666408 39 runners.go:190] cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e Pods: 20 out of 20 created, 4 running, 16 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0513 23:27:01.667799 39 runners.go:190] cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 13 23:27:02.668: INFO: Checking pods on node node2 via /runningpods endpoint
May 13 23:27:02.668: INFO: Checking pods on node node1 via /runningpods endpoint
May 13 23:27:02.690: INFO: Resource usage on node "master2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.256       3520.82                  1495.64
"runtime"   0.086       571.17                   240.53
"kubelet"   0.086       571.17                   240.53

Resource usage on node "master3":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.540       3833.28                  1723.44
"runtime"   0.112       523.63                   245.90
"kubelet"   0.112       523.63                   245.90

Resource usage on node "node1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"kubelet"   1.067       2612.14                  553.76
"/"         1.996       6449.95                  2393.25
"runtime"   1.067       2612.14                  553.76

Resource usage on node "node2":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         1.979       4178.01                  1358.70
"runtime"   1.022       1496.67                  552.51
"kubelet"   1.022       1496.67                  552.51

Resource usage on node "master1":
container   cpu(cores)  memory_working_set(MB)  memory_rss(MB)
"/"         0.356       4779.99                  1608.34
"runtime"   0.118       694.18                   286.57
"kubelet"   0.118       694.18                   286.57

STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e in namespace kubelet-8990, will wait for the garbage collector to delete the pods
May 13 23:27:02.748: INFO: Deleting ReplicationController cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e took: 5.277552ms
May 13 23:27:03.348: INFO: Terminating ReplicationController cleanup20-6456419c-5ba6-4e98-8be0-24d9ebb7e53e pods took: 600.304012ms
May 13 23:27:17.850: INFO: Checking pods on node node2 via /runningpods endpoint
May 13 23:27:17.850: INFO: Checking pods on node node1 via /runningpods endpoint
May 13 23:27:17.898: INFO: Deleting 20 pods on 2 nodes completed in 1.049802469s after the RC was deleted
May 13 23:27:17.899: INFO: CPU usage of containers on node "node2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  1.266  1.690  1.690  1.690  1.690
"runtime"   0.000  0.000  0.943  0.943  0.943  0.943  0.943
"kubelet"   0.000  0.000  0.943  0.943  0.943  0.943  0.943

CPU usage of containers on node "master1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.356  0.392  0.392  0.392  0.392
"runtime"   0.000  0.000  0.118  0.118  0.118  0.118  0.118
"kubelet"   0.000  0.000  0.118  0.118  0.118  0.118  0.118

CPU usage of containers on node "master2":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.256  0.330  0.330  0.330  0.330
"runtime"   0.000  0.000  0.086  0.086  0.086  0.086  0.086
"kubelet"   0.000  0.000  0.086  0.086  0.086  0.086  0.086

CPU usage of containers on node "master3":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  0.412  0.412  0.433  0.433  0.433
"runtime"   0.000  0.000  0.105  0.105  0.105  0.105  0.105
"kubelet"   0.000  0.000  0.105  0.105  0.105  0.105  0.105

CPU usage of containers on node "node1":
container   5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"         0.000  0.000  1.396  1.834  1.834  1.834  1.834
"runtime"   0.000  0.000  0.745  0.745  0.745  0.745  0.745
"kubelet"   0.000  0.000  0.745  0.745  0.745  0.745  0.745

[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node node1
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node node2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:27:17.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-8990" for this suite.

• [SLOW TEST:36.372 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 ------------------------------ {"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":-1,"completed":9,"skipped":1458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:27:04.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 STEP: Creating pod liveness-override-150d860f-5ab1-4637-a9d4-2009fabee4b5 in namespace container-probe-5066 May 13 23:27:14.723: INFO: Started pod liveness-override-150d860f-5ab1-4637-a9d4-2009fabee4b5 in namespace container-probe-5066 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:27:14.726: INFO: Initial restart count of pod liveness-override-150d860f-5ab1-4637-a9d4-2009fabee4b5 is 0 May 13 23:27:18.735: INFO: Restart count of pod container-probe-5066/liveness-override-150d860f-5ab1-4637-a9d4-2009fabee4b5 is now 1 (4.008927687s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:18.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5066" for this suite. 
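------------------------------
The [Feature:ProbeTerminationGracePeriod] spec above shows the restart arriving within a few seconds of the probe failing, rather than after the pod-level grace period. A sketch of the probe field behind that; the command and value are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// When TerminationGracePeriodSeconds is set on a liveness (or startup) probe,
// a probe-triggered kill uses this shorter grace period instead of the
// pod-level one, so the restart shows up almost immediately. The field is
// alpha in v1.21 behind the ProbeTerminationGracePeriod feature gate.
func overrideGraceLiveness() *corev1.Probe {
	probeGrace := int64(1) // assumption: a deliberately short override
	return &corev1.Probe{
		Handler: corev1.Handler{ // ProbeHandler in k8s >= 1.23
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		FailureThreshold:              1,
		TerminationGracePeriodSeconds: &probeGrace,
	}
}

func main() { _ = overrideGraceLiveness() }
------------------------------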
• [SLOW TEST:14.068 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:449 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":6,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:53.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true May 13 23:27:12.463: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false May 13 23:27:13.462: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false May 13 23:27:14.464: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false STEP: patching pod status with condition "k8s.io/test-condition1" to false May 13 23:27:16.474: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true May 13 23:27:17.474: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true May 13 23:27:18.473: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true May 13 23:27:19.476: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:20.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6874" for this suite. 
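------------------------------
The readiness-gate spec above declares two custom condition types on the pod and then patches them in the pod's status; the kubelet only reports Ready=true once both are True, which is the back-and-forth visible in the log. A sketch of the pod side; the container details are assumptions, the condition types come from the log:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// With readinessGates set, pod readiness is the AND of container readiness
// and every listed condition being True in pod.status.conditions, which an
// external agent has to patch in.
func readinessGatedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
		Spec: corev1.PodSpec{
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: "k8s.io/test-condition1"},
				{ConditionType: "k8s.io/test-condition2"},
			},
			Containers: []corev1.Container{{
				Name:    "pod-readiness-gate",
				Image:   "busybox", // assumption
				Command: []string{"sleep", "600"},
			}},
		},
	}
}

func main() { _ = readinessGatedPod() }
------------------------------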
• [SLOW TEST:27.081 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:777 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":9,"skipped":482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:27:18.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:201 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:20.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-5462" for this suite. 
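------------------------------
The sysctl spec above asks for a sysctl that is known to the kubelet but not safe-listed, and checks the pod is rejected rather than started. A sketch of the pod side; kernel.msgmax and its value are assumptions chosen for illustration:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Unless the kubelet was started with a matching --allowed-unsafe-sysctls
// entry, a pod requesting a non-safe-listed sysctl like this is rejected
// (SysctlForbidden) instead of being scheduled to run.
func unsafeSysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.msgmax", Value: "65536"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumption
				Command: []string{"sleep", "600"},
			}},
		},
	}
}

func main() { _ = unsafeSysctlPod() }
------------------------------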
• ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]","total":-1,"completed":7,"skipped":514,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 13 23:27:21.021: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:27:18.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 STEP: Creating pod startup-override-223213b1-f47b-4a30-b63e-ff9dfd5da99a in namespace container-probe-4717 May 13 23:27:22.750: INFO: Started pod startup-override-223213b1-f47b-4a30-b63e-ff9dfd5da99a in namespace container-probe-4717 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:27:22.753: INFO: Initial restart count of pod startup-override-223213b1-f47b-4a30-b63e-ff9dfd5da99a is 0 May 13 23:27:24.760: INFO: Restart count of pod container-probe-4717/startup-override-223213b1-f47b-4a30-b63e-ff9dfd5da99a is now 1 (2.007363236s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:24.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4717" for this suite. 
• [SLOW TEST:6.065 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:477 ------------------------------ {"msg":"PASSED [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]","total":-1,"completed":10,"skipped":1880,"failed":0} May 13 23:27:24.776: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:50.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay May 13 23:24:54.406: INFO: watch delete seen for pod-submit-status-0-0 May 13 23:24:54.406: INFO: Pod pod-submit-status-0-0 on node node2 timings total=3.755290497s t=160ms run=0s execute=0s May 13 23:24:58.420: INFO: watch delete seen for pod-submit-status-0-1 May 13 23:24:58.420: INFO: Pod pod-submit-status-0-1 on node node1 timings total=4.014391599s t=227ms run=0s execute=0s May 13 23:25:02.314: INFO: watch delete seen for pod-submit-status-1-0 May 13 23:25:02.314: INFO: Pod pod-submit-status-1-0 on node node1 timings total=11.663548903s t=1.709s run=3s execute=0s May 13 23:25:03.096: INFO: watch delete seen for pod-submit-status-2-0 May 13 23:25:03.096: INFO: Pod pod-submit-status-2-0 on node node2 timings total=12.445019637s t=1.808s run=0s execute=0s May 13 23:25:12.350: INFO: watch delete seen for pod-submit-status-2-1 May 13 23:25:12.350: INFO: Pod pod-submit-status-2-1 on node node2 timings total=9.254115336s t=1.758s run=0s execute=0s May 13 23:25:12.361: INFO: watch delete seen for pod-submit-status-1-1 May 13 23:25:12.361: INFO: Pod pod-submit-status-1-1 on node node2 timings total=10.046575704s t=1.468s run=0s execute=0s May 13 23:25:22.305: INFO: watch delete seen for pod-submit-status-2-2 May 13 23:25:22.305: INFO: Pod pod-submit-status-2-2 on node node1 timings total=9.955405162s t=598ms run=0s execute=0s May 13 23:25:22.317: INFO: watch delete seen for pod-submit-status-1-2 May 13 23:25:22.317: INFO: Pod pod-submit-status-1-2 on node node1 timings total=9.956462703s t=1.572s run=0s execute=0s May 13 23:25:29.199: INFO: watch delete seen for pod-submit-status-1-3 May 13 23:25:29.199: INFO: Pod pod-submit-status-1-3 on node node2 timings total=6.881344632s t=1.357s run=0s execute=0s May 13 23:25:35.609: INFO: watch delete seen for pod-submit-status-2-3 May 13 23:25:35.610: INFO: Pod pod-submit-status-2-3 on node node2 timings total=13.304181788s t=1.822s run=0s execute=0s May 13 23:25:36.199: INFO: watch delete seen for pod-submit-status-0-2 May 13 23:25:36.199: INFO: Pod pod-submit-status-0-2 on node node2 timings 
total=37.778693599s t=716ms run=0s execute=0s May 13 23:25:42.307: INFO: watch delete seen for pod-submit-status-2-4 May 13 23:25:42.307: INFO: Pod pod-submit-status-2-4 on node node1 timings total=6.697603495s t=187ms run=0s execute=0s May 13 23:25:42.316: INFO: watch delete seen for pod-submit-status-1-4 May 13 23:25:42.316: INFO: Pod pod-submit-status-1-4 on node node1 timings total=13.117128529s t=561ms run=0s execute=0s May 13 23:25:42.363: INFO: watch delete seen for pod-submit-status-0-3 May 13 23:25:42.364: INFO: Pod pod-submit-status-0-3 on node node2 timings total=6.16435792s t=1.643s run=0s execute=0s May 13 23:25:44.902: INFO: watch delete seen for pod-submit-status-0-4 May 13 23:25:44.902: INFO: Pod pod-submit-status-0-4 on node node2 timings total=2.53802708s t=986ms run=0s execute=0s May 13 23:25:52.311: INFO: watch delete seen for pod-submit-status-0-5 May 13 23:25:52.311: INFO: Pod pod-submit-status-0-5 on node node1 timings total=7.409241057s t=1.416s run=3s execute=0s May 13 23:25:52.375: INFO: watch delete seen for pod-submit-status-1-5 May 13 23:25:52.375: INFO: Pod pod-submit-status-1-5 on node node2 timings total=10.059328748s t=1.243s run=0s execute=0s May 13 23:25:52.383: INFO: watch delete seen for pod-submit-status-2-5 May 13 23:25:52.383: INFO: Pod pod-submit-status-2-5 on node node2 timings total=10.07592053s t=1.305s run=0s execute=0s May 13 23:25:55.541: INFO: watch delete seen for pod-submit-status-2-6 May 13 23:25:55.541: INFO: Pod pod-submit-status-2-6 on node node2 timings total=3.158155736s t=965ms run=0s execute=0s May 13 23:26:02.304: INFO: watch delete seen for pod-submit-status-1-6 May 13 23:26:02.304: INFO: Pod pod-submit-status-1-6 on node node1 timings total=9.929021514s t=24ms run=0s execute=0s May 13 23:26:02.351: INFO: watch delete seen for pod-submit-status-0-6 May 13 23:26:02.351: INFO: Pod pod-submit-status-0-6 on node node2 timings total=10.039908846s t=1.825s run=0s execute=0s May 13 23:26:05.221: INFO: watch delete seen for pod-submit-status-1-7 May 13 23:26:05.221: INFO: Pod pod-submit-status-1-7 on node node2 timings total=2.916253802s t=580ms run=0s execute=0s May 13 23:26:07.016: INFO: watch delete seen for pod-submit-status-1-8 May 13 23:26:07.016: INFO: Pod pod-submit-status-1-8 on node node2 timings total=1.795476869s t=561ms run=0s execute=0s May 13 23:26:08.865: INFO: watch delete seen for pod-submit-status-1-9 May 13 23:26:08.865: INFO: Pod pod-submit-status-1-9 on node node2 timings total=1.848414854s t=364ms run=0s execute=0s May 13 23:26:11.419: INFO: watch delete seen for pod-submit-status-1-10 May 13 23:26:11.419: INFO: Pod pod-submit-status-1-10 on node node2 timings total=2.554339022s t=1.074s run=0s execute=0s May 13 23:26:12.305: INFO: watch delete seen for pod-submit-status-2-7 May 13 23:26:12.305: INFO: Pod pod-submit-status-2-7 on node node1 timings total=16.763776184s t=1.578s run=3s execute=0s May 13 23:26:12.325: INFO: watch delete seen for pod-submit-status-0-7 May 13 23:26:12.325: INFO: Pod pod-submit-status-0-7 on node node1 timings total=9.974235982s t=712ms run=0s execute=0s May 13 23:26:22.306: INFO: watch delete seen for pod-submit-status-0-8 May 13 23:26:22.306: INFO: Pod pod-submit-status-0-8 on node node1 timings total=9.980900659s t=1.594s run=0s execute=0s May 13 23:26:22.354: INFO: watch delete seen for pod-submit-status-1-11 May 13 23:26:22.354: INFO: Pod pod-submit-status-1-11 on node node2 timings total=10.934748151s t=1.141s run=0s execute=0s May 13 23:26:32.310: INFO: watch delete seen for 
pod-submit-status-1-12 May 13 23:26:32.310: INFO: Pod pod-submit-status-1-12 on node node1 timings total=9.955919054s t=301ms run=0s execute=0s May 13 23:26:32.346: INFO: watch delete seen for pod-submit-status-0-9 May 13 23:26:32.346: INFO: Pod pod-submit-status-0-9 on node node2 timings total=10.040172413s t=1.915s run=0s execute=0s May 13 23:26:35.233: INFO: watch delete seen for pod-submit-status-2-8 May 13 23:26:35.234: INFO: Pod pod-submit-status-2-8 on node node2 timings total=22.928229745s t=1.27s run=0s execute=0s May 13 23:26:44.819: INFO: watch delete seen for pod-submit-status-0-10 May 13 23:26:44.819: INFO: Pod pod-submit-status-0-10 on node node1 timings total=12.47293754s t=1.448s run=0s execute=0s May 13 23:26:45.823: INFO: watch delete seen for pod-submit-status-1-13 May 13 23:26:45.824: INFO: Pod pod-submit-status-1-13 on node node1 timings total=13.513523875s t=776ms run=0s execute=0s May 13 23:26:46.422: INFO: watch delete seen for pod-submit-status-2-9 May 13 23:26:46.422: INFO: Pod pod-submit-status-2-9 on node node2 timings total=11.188281886s t=990ms run=0s execute=0s May 13 23:26:48.623: INFO: watch delete seen for pod-submit-status-2-10 May 13 23:26:48.623: INFO: Pod pod-submit-status-2-10 on node node1 timings total=2.20137645s t=581ms run=0s execute=0s May 13 23:26:52.236: INFO: watch delete seen for pod-submit-status-0-11 May 13 23:26:52.236: INFO: Pod pod-submit-status-0-11 on node node1 timings total=7.416180411s t=353ms run=0s execute=0s May 13 23:26:52.820: INFO: watch delete seen for pod-submit-status-1-14 May 13 23:26:52.820: INFO: Pod pod-submit-status-1-14 on node node1 timings total=6.996217818s t=84ms run=0s execute=0s May 13 23:26:55.822: INFO: watch delete seen for pod-submit-status-2-11 May 13 23:26:55.822: INFO: Pod pod-submit-status-2-11 on node node1 timings total=7.198386956s t=1.689s run=0s execute=0s May 13 23:26:58.821: INFO: watch delete seen for pod-submit-status-0-12 May 13 23:26:58.821: INFO: Pod pod-submit-status-0-12 on node node1 timings total=6.585600083s t=374ms run=0s execute=0s May 13 23:27:01.209: INFO: watch delete seen for pod-submit-status-2-12 May 13 23:27:01.209: INFO: Pod pod-submit-status-2-12 on node node2 timings total=5.387314795s t=1.205s run=0s execute=0s May 13 23:27:08.624: INFO: watch delete seen for pod-submit-status-0-13 May 13 23:27:08.624: INFO: Pod pod-submit-status-0-13 on node node1 timings total=9.803074159s t=216ms run=0s execute=0s May 13 23:27:16.021: INFO: watch delete seen for pod-submit-status-2-13 May 13 23:27:16.021: INFO: Pod pod-submit-status-2-13 on node node1 timings total=14.811949216s t=1.725s run=0s execute=0s May 13 23:27:16.421: INFO: watch delete seen for pod-submit-status-0-14 May 13 23:27:16.421: INFO: Pod pod-submit-status-0-14 on node node1 timings total=7.796642881s t=1.169s run=0s execute=0s May 13 23:27:32.353: INFO: watch delete seen for pod-submit-status-2-14 May 13 23:27:32.353: INFO: Pod pod-submit-status-2-14 on node node2 timings total=16.331416348s t=1.67s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:32.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3770" for this suite. 
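------------------------------
The Pods Extended spec above submits pods that always exit non-zero, deletes each after a random delay, and watches the namespace so it can assert a pending container is never reported as Succeeded; every "watch delete seen" line is one observed deletion. A rough sketch of such an observation loop with client-go; the kubeconfig path and namespace are taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// watchDeletes streams pod events for one namespace and reports deletions,
// the same kind of event stream the test drives its assertions from.
func watchDeletes(ctx context.Context, kubeconfig, ns string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if ev.Type == watch.Deleted {
			fmt.Println("watch delete seen") // the log also prints the pod name
		}
	}
	return nil
}

func main() { _ = watchDeletes(context.Background(), "/root/.kube/config", "pods-3770") }
------------------------------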
• [SLOW TEST:161.736 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":-1,"completed":2,"skipped":227,"failed":0} May 13 23:27:32.366: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:26:22.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 STEP: Creating pod startup-880389cb-02c3-40f4-ab48-4926de79d833 in namespace container-probe-3614 May 13 23:26:26.414: INFO: Started pod startup-880389cb-02c3-40f4-ab48-4926de79d833 in namespace container-probe-3614 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:26:26.417: INFO: Initial restart count of pod startup-880389cb-02c3-40f4-ab48-4926de79d833 is 0 May 13 23:27:34.599: INFO: Restart count of pod container-probe-3614/startup-880389cb-02c3-40f4-ab48-4926de79d833 is now 1 (1m8.182635439s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:34.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3614" for this suite. 
• [SLOW TEST:72.242 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:313 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":-1,"completed":8,"skipped":888,"failed":0} May 13 23:27:34.618: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:27:20.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 May 13 23:27:42.646: INFO: The status of Pod startup-1e2f3702-3809-4824-96ce-e12450cdded9 is Running (Ready = true) May 13 23:27:42.648: INFO: Container started at 2022-05-13 23:27:42.643475718 +0000 UTC m=+186.447875550, pod became ready at 2022-05-13 23:27:42.646900941 +0000 UTC m=+186.451300773, 3.425223ms after startupProbe succeeded [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:27:42.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-240" for this suite. 
• [SLOW TEST:22.063 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:400 ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:38.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0513 23:24:38.428963 24 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 23:24:38.429: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 23:24:38.430: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342 STEP: Creating pod startup-c60a549e-0671-4dc8-a063-04c4b583c39c in namespace container-probe-1883 May 13 23:24:48.450: INFO: Started pod startup-c60a549e-0671-4dc8-a063-04c4b583c39c in namespace container-probe-1883 STEP: checking the pod's current state and verifying that restartCount is present May 13 23:24:48.452: INFO: Initial restart count of pod startup-c60a549e-0671-4dc8-a063-04c4b583c39c is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:28:48.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1883" for this suite. 
• [SLOW TEST:250.580 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted by liveness probe because startup probe delays it
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:342
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":-1,"completed":1,"skipped":151,"failed":0}
May 13 23:28:48.984: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:46.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
STEP: Creating pod liveness-2f138916-580a-4a1e-962f-cda85f4089fa in namespace container-probe-2398
May 13 23:24:50.288: INFO: Started pod liveness-2f138916-580a-4a1e-962f-cda85f4089fa in namespace container-probe-2398
STEP: checking the pod's current state and verifying that restartCount is present
May 13 23:24:50.291: INFO: Initial restart count of pod liveness-2f138916-580a-4a1e-962f-cda85f4089fa is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:28:50.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2398" for this suite.
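The "non-local redirect http liveness probe" spec above points an HTTP liveness probe at an endpoint that answers with a redirect to a different host. The HTTP prober follows same-host redirects, but it does not follow a non-local one; instead it records the probe as a success (surfacing a ProbeWarning event), which is why restartCount stays 0 here. A standalone sketch of that rule using only the Go standard library; the URL and port below are assumptions for illustration, loosely modelled on the redirect endpoint the e2e test image serves.

package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// probeOnce performs a single HTTP check, stopping at the first redirect
// and applying the non-local-redirect rule described above.
func probeOnce(probeURL string) (bool, error) {
	u, err := url.Parse(probeURL)
	if err != nil {
		return false, err
	}
	client := &http.Client{
		// Do not auto-follow redirects; hand back the 3xx response itself.
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	resp, err := client.Get(probeURL)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 300 && resp.StatusCode < 400 {
		loc, err := resp.Location()
		if err != nil {
			return false, err
		}
		if loc.Hostname() != u.Hostname() {
			// Redirect to another host: treat as success without following,
			// so a liveness probe configured this way never fails here.
			return true, nil
		}
		// Same-host redirect: a full prober would follow it instead.
	}
	return resp.StatusCode >= 200 && resp.StatusCode < 400, nil
}

func main() {
	ok, err := probeOnce("http://127.0.0.1:8080/redirect?loc=http%3A%2F%2F0.0.0.0%2F")
	fmt.Println(ok, err)
}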
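The [sig-node] NodeLease spec whose output follows checks that full NodeStatus heartbeats are reported far less often than the node's Lease is renewed: the kubelet renews its Lease object in the kube-node-lease namespace roughly every 10s (a quarter of the default 40s lease duration), while status heartbeats should only move when something else in the status changes, which is what the repeated "(with other status changes)" diffs below show for node2. A sketch of how to observe both channels with client-go; the kubeconfig path matches this suite's, the node name node2 comes from the log, and the minimal error handling is an assumption of the sketch.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// The per-node Lease lives in kube-node-lease and shares the node's name.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, "node2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	fmt.Println("lease renewed:", lease.Spec.RenewTime) // moves every ~10s
	for _, c := range node.Status.Conditions {
		// Condition heartbeats should lag well behind lease renewals.
		fmt.Printf("%s heartbeat: %s\n", c.Type, c.LastHeartbeatTime)
	}
}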
• [SLOW TEST:244.591 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a non-local redirect http liveness probe
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":-1,"completed":2,"skipped":79,"failed":0}
May 13 23:28:50.838: INFO: Running AfterSuite actions on all nodes
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:25:33.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43
[It] the kubelet should report node status infrequently
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
STEP: wait until node is ready
May 13 23:25:33.427: INFO: Waiting up to 5m0s for node node2 condition Ready to be true
STEP: wait until there is node lease
STEP: verify NodeStatus report period is longer than lease duration
May 13 23:25:34.438: INFO: node status heartbeat is unchanged for 1.003066741s, waiting for 1m20s
May 13 23:25:35.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s
May 13 23:25:35.444: INFO: v1.NodeStatus{
  	Capacity:    {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...},
  	Phase:       "",
  	Conditions: []v1.NodeCondition{
  		{Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...},
  		{
  			Type:               "MemoryPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2022-05-13 23:25:25 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2022-05-13 23:25:35 +0000 UTC"},
  			LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"},
  			Reason:             "KubeletHasSufficientMemory",
  			Message:            "kubelet has sufficient memory available",
  		},
  		{
  			Type:               "DiskPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2022-05-13 23:25:25 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2022-05-13 23:25:35 +0000 UTC"},
  			LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"},
  			Reason:             "KubeletHasNoDiskPressure",
  			Message:            "kubelet has no disk pressure",
  		},
  		{
  			Type:               "PIDPressure",
  			Status:             "False",
- 			LastHeartbeatTime:  v1.Time{Time: s"2022-05-13 23:25:25 +0000 UTC"},
+ 			LastHeartbeatTime:  v1.Time{Time: s"2022-05-13 23:25:35 +0000 UTC"},
  			LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"},
  			Reason:             "KubeletHasSufficientPID",
  			Message:            "kubelet has sufficient PID available",
  		},
  		{Type:
"Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... // 5 identical fields } May 13 23:25:36.438: INFO: node status heartbeat is unchanged for 998.205774ms, waiting for 1m20s May 13 23:25:37.439: INFO: node status heartbeat is unchanged for 1.999782609s, waiting for 1m20s May 13 23:25:38.439: INFO: node status heartbeat is unchanged for 2.999263848s, waiting for 1m20s May 13 23:25:39.440: INFO: node status heartbeat is unchanged for 4.000143364s, waiting for 1m20s May 13 23:25:40.439: INFO: node status heartbeat is unchanged for 4.999571075s, waiting for 1m20s May 13 23:25:41.438: INFO: node status heartbeat is unchanged for 5.998631479s, waiting for 1m20s May 13 23:25:42.440: INFO: node status heartbeat is unchanged for 7.000858393s, waiting for 1m20s May 13 23:25:43.439: INFO: node status heartbeat is unchanged for 8.000021546s, waiting for 1m20s May 13 23:25:44.440: INFO: node status heartbeat is unchanged for 9.000522541s, waiting for 1m20s May 13 23:25:45.439: INFO: node status heartbeat is unchanged for 9.999713015s, waiting for 1m20s May 13 23:25:46.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:25:46.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:45 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:45 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:45 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:25:47.440: INFO: node status heartbeat is unchanged for 1.001344122s, waiting for 1m20s May 13 23:25:48.438: INFO: node status heartbeat is unchanged for 1.999072444s, waiting for 1m20s May 13 23:25:49.440: INFO: node status heartbeat is unchanged for 3.001525463s, waiting for 1m20s May 13 23:25:50.438: INFO: node status heartbeat is unchanged for 3.999441886s, waiting for 1m20s May 13 23:25:51.438: INFO: node status heartbeat is unchanged for 4.999480258s, waiting for 1m20s May 13 23:25:52.441: INFO: node status heartbeat is unchanged for 6.001959853s, waiting for 1m20s May 13 23:25:53.438: INFO: node status heartbeat is unchanged for 6.999017064s, waiting for 1m20s May 13 23:25:54.439: INFO: node status heartbeat is unchanged for 8.000057039s, waiting for 1m20s May 13 23:25:55.438: INFO: node status heartbeat is unchanged for 8.999377191s, waiting for 1m20s May 13 23:25:56.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:25:56.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:55 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:55 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:55 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:25:57.440: INFO: node status heartbeat is unchanged for 1.000651774s, waiting for 1m20s May 13 23:25:58.439: INFO: node status heartbeat is unchanged for 1.999206551s, waiting for 1m20s May 13 23:25:59.439: INFO: node status heartbeat is unchanged for 2.999873051s, waiting for 1m20s May 13 23:26:00.439: INFO: node status heartbeat is unchanged for 3.999911722s, waiting for 1m20s May 13 23:26:01.439: INFO: node status heartbeat is unchanged for 4.999763306s, waiting for 1m20s May 13 23:26:02.440: INFO: node status heartbeat is unchanged for 6.000374985s, waiting for 1m20s May 13 23:26:03.439: INFO: node status heartbeat is unchanged for 6.999588428s, waiting for 1m20s May 13 23:26:04.441: INFO: node status heartbeat is unchanged for 8.001228597s, waiting for 1m20s May 13 23:26:05.439: INFO: node status heartbeat is unchanged for 9.000005378s, waiting for 1m20s May 13 23:26:06.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:26:06.448: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:05 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:05 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:25:55 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:05 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:26:07.441: INFO: node status heartbeat is unchanged for 1.001568428s, waiting for 1m20s May 13 23:26:08.440: INFO: node status heartbeat is unchanged for 2.001158802s, waiting for 1m20s May 13 23:26:09.439: INFO: node status heartbeat is unchanged for 2.999333829s, waiting for 1m20s May 13 23:26:10.439: INFO: node status heartbeat is unchanged for 3.999440813s, waiting for 1m20s May 13 23:26:11.439: INFO: node status heartbeat is unchanged for 4.999410115s, waiting for 1m20s May 13 23:26:12.439: INFO: node status heartbeat is unchanged for 6.000184938s, waiting for 1m20s May 13 23:26:13.439: INFO: node status heartbeat is unchanged for 6.999761118s, waiting for 1m20s May 13 23:26:14.438: INFO: node status heartbeat is unchanged for 7.998382508s, waiting for 1m20s May 13 23:26:15.440: INFO: node status heartbeat is unchanged for 9.001148524s, waiting for 1m20s May 13 23:26:16.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:26:16.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:15 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:15 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:05 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:15 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:26:17.439: INFO: node status heartbeat is unchanged for 1.000315554s, waiting for 1m20s May 13 23:26:18.438: INFO: node status heartbeat is unchanged for 1.999393354s, waiting for 1m20s May 13 23:26:19.438: INFO: node status heartbeat is unchanged for 2.999246268s, waiting for 1m20s May 13 23:26:20.440: INFO: node status heartbeat is unchanged for 4.001665126s, waiting for 1m20s May 13 23:26:21.439: INFO: node status heartbeat is unchanged for 4.999948459s, waiting for 1m20s May 13 23:26:22.439: INFO: node status heartbeat is unchanged for 6.000899888s, waiting for 1m20s May 13 23:26:23.438: INFO: node status heartbeat is unchanged for 6.999691248s, waiting for 1m20s May 13 23:26:24.439: INFO: node status heartbeat is unchanged for 7.999947945s, waiting for 1m20s May 13 23:26:25.439: INFO: node status heartbeat is unchanged for 9.000229658s, waiting for 1m20s May 13 23:26:26.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:26:26.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:15 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:25 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:26:27.439: INFO: node status heartbeat is unchanged for 1.000134419s, waiting for 1m20s May 13 23:26:28.440: INFO: node status heartbeat is unchanged for 2.001446371s, waiting for 1m20s May 13 23:26:29.438: INFO: node status heartbeat is unchanged for 2.999191888s, waiting for 1m20s May 13 23:26:30.440: INFO: node status heartbeat is unchanged for 4.001677078s, waiting for 1m20s May 13 23:26:31.438: INFO: node status heartbeat is unchanged for 4.999727911s, waiting for 1m20s May 13 23:26:32.440: INFO: node status heartbeat is unchanged for 6.00195921s, waiting for 1m20s May 13 23:26:33.440: INFO: node status heartbeat is unchanged for 7.001157576s, waiting for 1m20s May 13 23:26:34.438: INFO: node status heartbeat is unchanged for 7.999946313s, waiting for 1m20s May 13 23:26:35.439: INFO: node status heartbeat is unchanged for 9.000668613s, waiting for 1m20s May 13 23:26:36.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:26:36.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:35 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:35 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:25 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:35 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:26:37.440: INFO: node status heartbeat is unchanged for 1.001185037s, waiting for 1m20s May 13 23:26:38.438: INFO: node status heartbeat is unchanged for 1.999525166s, waiting for 1m20s May 13 23:26:39.441: INFO: node status heartbeat is unchanged for 3.001915473s, waiting for 1m20s May 13 23:26:40.441: INFO: node status heartbeat is unchanged for 4.001955318s, waiting for 1m20s May 13 23:26:41.438: INFO: node status heartbeat is unchanged for 4.998781006s, waiting for 1m20s May 13 23:26:42.441: INFO: node status heartbeat is unchanged for 6.002027719s, waiting for 1m20s May 13 23:26:43.439: INFO: node status heartbeat is unchanged for 7.000578089s, waiting for 1m20s May 13 23:26:44.440: INFO: node status heartbeat is unchanged for 8.000876241s, waiting for 1m20s May 13 23:26:45.442: INFO: node status heartbeat is unchanged for 9.002965343s, waiting for 1m20s May 13 23:26:46.438: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:26:46.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:45 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:45 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:35 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:45 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:26:47.441: INFO: node status heartbeat is unchanged for 1.002869342s, waiting for 1m20s May 13 23:26:48.438: INFO: node status heartbeat is unchanged for 1.999900004s, waiting for 1m20s May 13 23:26:49.441: INFO: node status heartbeat is unchanged for 3.002637876s, waiting for 1m20s May 13 23:26:50.439: INFO: node status heartbeat is unchanged for 4.001096159s, waiting for 1m20s May 13 23:26:51.439: INFO: node status heartbeat is unchanged for 5.000608549s, waiting for 1m20s May 13 23:26:52.438: INFO: node status heartbeat is unchanged for 6.000229404s, waiting for 1m20s May 13 23:26:53.438: INFO: node status heartbeat is unchanged for 7.00042517s, waiting for 1m20s May 13 23:26:54.437: INFO: node status heartbeat is unchanged for 7.999560463s, waiting for 1m20s May 13 23:26:55.438: INFO: node status heartbeat is unchanged for 9.000028248s, waiting for 1m20s May 13 23:26:56.439: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 13 23:26:56.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:56 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:56 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:45 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:56 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:26:57.441: INFO: node status heartbeat is unchanged for 1.001892468s, waiting for 1m20s May 13 23:26:58.438: INFO: node status heartbeat is unchanged for 1.999114489s, waiting for 1m20s May 13 23:26:59.441: INFO: node status heartbeat is unchanged for 3.001419988s, waiting for 1m20s May 13 23:27:00.440: INFO: node status heartbeat is unchanged for 4.000394455s, waiting for 1m20s May 13 23:27:01.440: INFO: node status heartbeat is unchanged for 5.000415617s, waiting for 1m20s May 13 23:27:02.440: INFO: node status heartbeat is unchanged for 6.000823095s, waiting for 1m20s May 13 23:27:03.439: INFO: node status heartbeat is unchanged for 6.999312494s, waiting for 1m20s May 13 23:27:04.441: INFO: node status heartbeat is unchanged for 8.001437173s, waiting for 1m20s May 13 23:27:05.439: INFO: node status heartbeat is unchanged for 9.000063s, waiting for 1m20s May 13 23:27:06.438: INFO: node status heartbeat is unchanged for 9.999097173s, waiting for 1m20s May 13 23:27:07.442: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:27:07.447: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:06 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:06 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:26:56 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:06 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:27:08.439: INFO: node status heartbeat is unchanged for 997.122154ms, waiting for 1m20s May 13 23:27:09.441: INFO: node status heartbeat is unchanged for 1.998887617s, waiting for 1m20s May 13 23:27:10.440: INFO: node status heartbeat is unchanged for 2.997699114s, waiting for 1m20s May 13 23:27:11.438: INFO: node status heartbeat is unchanged for 3.995930433s, waiting for 1m20s May 13 23:27:12.440: INFO: node status heartbeat is unchanged for 4.997508442s, waiting for 1m20s May 13 23:27:13.439: INFO: node status heartbeat is unchanged for 5.996908593s, waiting for 1m20s May 13 23:27:14.440: INFO: node status heartbeat is unchanged for 6.997544816s, waiting for 1m20s May 13 23:27:15.441: INFO: node status heartbeat is unchanged for 7.999152988s, waiting for 1m20s May 13 23:27:16.438: INFO: node status heartbeat is unchanged for 8.996056343s, waiting for 1m20s May 13 23:27:17.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:27:17.445: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:06 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:16 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:27:18.440: INFO: node status heartbeat is unchanged for 999.503611ms, waiting for 1m20s May 13 23:27:19.440: INFO: node status heartbeat is unchanged for 1.999653031s, waiting for 1m20s May 13 23:27:20.441: INFO: node status heartbeat is unchanged for 3.000235585s, waiting for 1m20s May 13 23:27:21.440: INFO: node status heartbeat is unchanged for 3.999652845s, waiting for 1m20s May 13 23:27:22.441: INFO: node status heartbeat is unchanged for 5.000498244s, waiting for 1m20s May 13 23:27:23.439: INFO: node status heartbeat is unchanged for 5.998463848s, waiting for 1m20s May 13 23:27:24.440: INFO: node status heartbeat is unchanged for 6.999682507s, waiting for 1m20s May 13 23:27:25.438: INFO: node status heartbeat is unchanged for 7.998074777s, waiting for 1m20s May 13 23:27:26.439: INFO: node status heartbeat is unchanged for 8.998539958s, waiting for 1m20s May 13 23:27:27.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:27:27.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:16 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:26 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:27:28.440: INFO: node status heartbeat is unchanged for 999.836407ms, waiting for 1m20s May 13 23:27:29.440: INFO: node status heartbeat is unchanged for 1.999855787s, waiting for 1m20s May 13 23:27:30.441: INFO: node status heartbeat is unchanged for 3.001442664s, waiting for 1m20s May 13 23:27:31.440: INFO: node status heartbeat is unchanged for 4.000168981s, waiting for 1m20s May 13 23:27:32.438: INFO: node status heartbeat is unchanged for 4.997703253s, waiting for 1m20s May 13 23:27:33.439: INFO: node status heartbeat is unchanged for 5.99902467s, waiting for 1m20s May 13 23:27:34.441: INFO: node status heartbeat is unchanged for 7.000906076s, waiting for 1m20s May 13 23:27:35.442: INFO: node status heartbeat is unchanged for 8.00180732s, waiting for 1m20s May 13 23:27:36.438: INFO: node status heartbeat is unchanged for 8.99845301s, waiting for 1m20s May 13 23:27:37.439: INFO: node status heartbeat changed in 11s (with other status changes), waiting for 40s May 13 23:27:37.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:26 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:27:38.440: INFO: node status heartbeat is unchanged for 1.001900582s, waiting for 1m20s May 13 23:27:39.440: INFO: node status heartbeat is unchanged for 2.001802992s, waiting for 1m20s May 13 23:27:40.441: INFO: node status heartbeat is unchanged for 3.002767315s, waiting for 1m20s May 13 23:27:41.439: INFO: node status heartbeat is unchanged for 4.000492259s, waiting for 1m20s May 13 23:27:42.440: INFO: node status heartbeat is unchanged for 5.001381224s, waiting for 1m20s May 13 23:27:43.440: INFO: node status heartbeat is unchanged for 6.001298415s, waiting for 1m20s May 13 23:27:44.441: INFO: node status heartbeat is unchanged for 7.002990754s, waiting for 1m20s May 13 23:27:45.440: INFO: node status heartbeat is unchanged for 8.001796081s, waiting for 1m20s May 13 23:27:46.438: INFO: node status heartbeat is unchanged for 8.999495168s, waiting for 1m20s May 13 23:27:47.460: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:27:47.464: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:27:48.440: INFO: node status heartbeat is unchanged for 980.099156ms, waiting for 1m20s May 13 23:27:49.439: INFO: node status heartbeat is unchanged for 1.978896888s, waiting for 1m20s May 13 23:27:50.439: INFO: node status heartbeat is unchanged for 2.978974653s, waiting for 1m20s May 13 23:27:51.439: INFO: node status heartbeat is unchanged for 3.978963974s, waiting for 1m20s May 13 23:27:52.439: INFO: node status heartbeat is unchanged for 4.978964794s, waiting for 1m20s May 13 23:27:53.439: INFO: node status heartbeat is unchanged for 5.978802942s, waiting for 1m20s May 13 23:27:54.439: INFO: node status heartbeat is unchanged for 6.979094934s, waiting for 1m20s May 13 23:27:55.439: INFO: node status heartbeat is unchanged for 7.979284835s, waiting for 1m20s May 13 23:27:56.438: INFO: node status heartbeat is unchanged for 8.978415896s, waiting for 1m20s May 13 23:27:57.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:27:57.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:27:58.440: INFO: node status heartbeat is unchanged for 1.000742778s, waiting for 1m20s May 13 23:27:59.441: INFO: node status heartbeat is unchanged for 2.001586831s, waiting for 1m20s May 13 23:28:00.441: INFO: node status heartbeat is unchanged for 3.001515031s, waiting for 1m20s May 13 23:28:01.440: INFO: node status heartbeat is unchanged for 4.000427239s, waiting for 1m20s May 13 23:28:02.440: INFO: node status heartbeat is unchanged for 5.000835888s, waiting for 1m20s May 13 23:28:03.440: INFO: node status heartbeat is unchanged for 6.00076636s, waiting for 1m20s May 13 23:28:04.441: INFO: node status heartbeat is unchanged for 7.001440922s, waiting for 1m20s May 13 23:28:05.441: INFO: node status heartbeat is unchanged for 8.002148733s, waiting for 1m20s May 13 23:28:06.438: INFO: node status heartbeat is unchanged for 8.998622493s, waiting for 1m20s May 13 23:28:07.446: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:28:07.450: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:27:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:28:08.438: INFO: node status heartbeat is unchanged for 992.371922ms, waiting for 1m20s May 13 23:28:09.439: INFO: node status heartbeat is unchanged for 1.993244006s, waiting for 1m20s May 13 23:28:10.441: INFO: node status heartbeat is unchanged for 2.995276643s, waiting for 1m20s May 13 23:28:11.439: INFO: node status heartbeat is unchanged for 3.993005532s, waiting for 1m20s May 13 23:28:12.442: INFO: node status heartbeat is unchanged for 4.99595535s, waiting for 1m20s May 13 23:28:13.440: INFO: node status heartbeat is unchanged for 5.994731347s, waiting for 1m20s May 13 23:28:14.441: INFO: node status heartbeat is unchanged for 6.994996676s, waiting for 1m20s May 13 23:28:15.441: INFO: node status heartbeat is unchanged for 7.995071183s, waiting for 1m20s May 13 23:28:16.439: INFO: node status heartbeat is unchanged for 8.992979747s, waiting for 1m20s May 13 23:28:17.441: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:28:17.446: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:28:18.439: INFO: node status heartbeat is unchanged for 997.370701ms, waiting for 1m20s May 13 23:28:19.442: INFO: node status heartbeat is unchanged for 2.000505963s, waiting for 1m20s May 13 23:28:20.442: INFO: node status heartbeat is unchanged for 3.000496966s, waiting for 1m20s May 13 23:28:21.439: INFO: node status heartbeat is unchanged for 3.997399834s, waiting for 1m20s May 13 23:28:22.442: INFO: node status heartbeat is unchanged for 5.000860325s, waiting for 1m20s May 13 23:28:23.439: INFO: node status heartbeat is unchanged for 5.998060972s, waiting for 1m20s May 13 23:28:24.439: INFO: node status heartbeat is unchanged for 6.997766814s, waiting for 1m20s May 13 23:28:25.441: INFO: node status heartbeat is unchanged for 7.999394964s, waiting for 1m20s May 13 23:28:26.440: INFO: node status heartbeat is unchanged for 8.998652436s, waiting for 1m20s May 13 23:28:27.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:28:27.445: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:28:28.440: INFO: node status heartbeat is unchanged for 999.601033ms, waiting for 1m20s May 13 23:28:29.441: INFO: node status heartbeat is unchanged for 2.000981949s, waiting for 1m20s May 13 23:28:30.439: INFO: node status heartbeat is unchanged for 2.998974228s, waiting for 1m20s May 13 23:28:31.439: INFO: node status heartbeat is unchanged for 3.998800116s, waiting for 1m20s May 13 23:28:32.439: INFO: node status heartbeat is unchanged for 4.998642343s, waiting for 1m20s May 13 23:28:33.439: INFO: node status heartbeat is unchanged for 5.99874534s, waiting for 1m20s May 13 23:28:34.441: INFO: node status heartbeat is unchanged for 7.001062796s, waiting for 1m20s May 13 23:28:35.440: INFO: node status heartbeat is unchanged for 7.999410193s, waiting for 1m20s May 13 23:28:36.439: INFO: node status heartbeat is unchanged for 8.998360562s, waiting for 1m20s May 13 23:28:37.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:28:37.445: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:28:38.439: INFO: node status heartbeat is unchanged for 999.103672ms, waiting for 1m20s May 13 23:28:39.441: INFO: node status heartbeat is unchanged for 2.00136743s, waiting for 1m20s May 13 23:28:40.441: INFO: node status heartbeat is unchanged for 3.000906686s, waiting for 1m20s May 13 23:28:41.440: INFO: node status heartbeat is unchanged for 4.000282885s, waiting for 1m20s May 13 23:28:42.440: INFO: node status heartbeat is unchanged for 5.000256463s, waiting for 1m20s May 13 23:28:43.440: INFO: node status heartbeat is unchanged for 5.999942844s, waiting for 1m20s May 13 23:28:44.441: INFO: node status heartbeat is unchanged for 7.000548796s, waiting for 1m20s May 13 23:28:45.439: INFO: node status heartbeat is unchanged for 7.998912969s, waiting for 1m20s May 13 23:28:46.438: INFO: node status heartbeat is unchanged for 8.998352764s, waiting for 1m20s May 13 23:28:47.442: INFO: node status heartbeat is unchanged for 10.001908446s, waiting for 1m20s May 13 23:28:48.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:28:48.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:28:49.440: INFO: node status heartbeat is unchanged for 1.000268001s, waiting for 1m20s May 13 23:28:50.440: INFO: node status heartbeat is unchanged for 2.000408944s, waiting for 1m20s May 13 23:28:51.440: INFO: node status heartbeat is unchanged for 3.000497189s, waiting for 1m20s May 13 23:28:52.440: INFO: node status heartbeat is unchanged for 4.000488841s, waiting for 1m20s May 13 23:28:53.441: INFO: node status heartbeat is unchanged for 5.001048937s, waiting for 1m20s May 13 23:28:54.440: INFO: node status heartbeat is unchanged for 6.000208744s, waiting for 1m20s May 13 23:28:55.439: INFO: node status heartbeat is unchanged for 6.999689958s, waiting for 1m20s May 13 23:28:56.440: INFO: node status heartbeat is unchanged for 8.000004285s, waiting for 1m20s May 13 23:28:57.439: INFO: node status heartbeat is unchanged for 8.999516858s, waiting for 1m20s May 13 23:28:58.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:28:58.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:28:59.443: INFO: node status heartbeat is unchanged for 1.002920951s, waiting for 1m20s May 13 23:29:00.441: INFO: node status heartbeat is unchanged for 2.000995878s, waiting for 1m20s May 13 23:29:01.439: INFO: node status heartbeat is unchanged for 2.999276271s, waiting for 1m20s May 13 23:29:02.440: INFO: node status heartbeat is unchanged for 4.000201782s, waiting for 1m20s May 13 23:29:03.440: INFO: node status heartbeat is unchanged for 4.999899374s, waiting for 1m20s May 13 23:29:04.440: INFO: node status heartbeat is unchanged for 6.000151547s, waiting for 1m20s May 13 23:29:05.441: INFO: node status heartbeat is unchanged for 7.001064225s, waiting for 1m20s May 13 23:29:06.439: INFO: node status heartbeat is unchanged for 7.998808907s, waiting for 1m20s May 13 23:29:07.441: INFO: node status heartbeat is unchanged for 9.001258582s, waiting for 1m20s May 13 23:29:08.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:29:08.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:28:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:29:09.440: INFO: node status heartbeat is unchanged for 1.000213964s, waiting for 1m20s May 13 23:29:10.440: INFO: node status heartbeat is unchanged for 2.001032483s, waiting for 1m20s May 13 23:29:11.439: INFO: node status heartbeat is unchanged for 2.999123832s, waiting for 1m20s May 13 23:29:12.442: INFO: node status heartbeat is unchanged for 4.002270049s, waiting for 1m20s May 13 23:29:13.440: INFO: node status heartbeat is unchanged for 5.000918021s, waiting for 1m20s May 13 23:29:14.438: INFO: node status heartbeat is unchanged for 5.998847171s, waiting for 1m20s May 13 23:29:15.441: INFO: node status heartbeat is unchanged for 7.001624748s, waiting for 1m20s May 13 23:29:16.439: INFO: node status heartbeat is unchanged for 7.999507721s, waiting for 1m20s May 13 23:29:17.438: INFO: node status heartbeat is unchanged for 8.999006401s, waiting for 1m20s May 13 23:29:18.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:29:18.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:29:19.441: INFO: node status heartbeat is unchanged for 1.001893168s, waiting for 1m20s May 13 23:29:20.442: INFO: node status heartbeat is unchanged for 2.00295907s, waiting for 1m20s May 13 23:29:21.438: INFO: node status heartbeat is unchanged for 2.999801295s, waiting for 1m20s May 13 23:29:22.439: INFO: node status heartbeat is unchanged for 3.999925277s, waiting for 1m20s May 13 23:29:23.440: INFO: node status heartbeat is unchanged for 5.00177837s, waiting for 1m20s May 13 23:29:24.441: INFO: node status heartbeat is unchanged for 6.001911764s, waiting for 1m20s May 13 23:29:25.438: INFO: node status heartbeat is unchanged for 6.999737992s, waiting for 1m20s May 13 23:29:26.440: INFO: node status heartbeat is unchanged for 8.001266214s, waiting for 1m20s May 13 23:29:27.442: INFO: node status heartbeat is unchanged for 9.00375737s, waiting for 1m20s May 13 23:29:28.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:29:28.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:29:29.440: INFO: node status heartbeat is unchanged for 1.001080363s, waiting for 1m20s May 13 23:29:30.440: INFO: node status heartbeat is unchanged for 2.001080728s, waiting for 1m20s May 13 23:29:31.439: INFO: node status heartbeat is unchanged for 3.000124401s, waiting for 1m20s May 13 23:29:32.439: INFO: node status heartbeat is unchanged for 4.000368228s, waiting for 1m20s May 13 23:29:33.439: INFO: node status heartbeat is unchanged for 5.000711251s, waiting for 1m20s May 13 23:29:34.440: INFO: node status heartbeat is unchanged for 6.00137123s, waiting for 1m20s May 13 23:29:35.438: INFO: node status heartbeat is unchanged for 6.999414037s, waiting for 1m20s May 13 23:29:36.439: INFO: node status heartbeat is unchanged for 8.000122255s, waiting for 1m20s May 13 23:29:37.441: INFO: node status heartbeat is unchanged for 9.001843095s, waiting for 1m20s May 13 23:29:38.440: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:29:38.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:27 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:37 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:29:39.439: INFO: node status heartbeat is unchanged for 999.824381ms, waiting for 1m20s May 13 23:29:40.439: INFO: node status heartbeat is unchanged for 1.999569697s, waiting for 1m20s May 13 23:29:41.438: INFO: node status heartbeat is unchanged for 2.998483677s, waiting for 1m20s May 13 23:29:42.440: INFO: node status heartbeat is unchanged for 4.000145432s, waiting for 1m20s May 13 23:29:43.440: INFO: node status heartbeat is unchanged for 5.000424843s, waiting for 1m20s May 13 23:29:44.439: INFO: node status heartbeat is unchanged for 5.999038095s, waiting for 1m20s May 13 23:29:45.439: INFO: node status heartbeat is unchanged for 6.999218128s, waiting for 1m20s May 13 23:29:46.439: INFO: node status heartbeat is unchanged for 7.999276152s, waiting for 1m20s May 13 23:29:47.441: INFO: node status heartbeat is unchanged for 9.001172861s, waiting for 1m20s May 13 23:29:48.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:29:48.443: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:37 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:47 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:29:49.439: INFO: node status heartbeat is unchanged for 999.630024ms, waiting for 1m20s May 13 23:29:50.439: INFO: node status heartbeat is unchanged for 2.000220019s, waiting for 1m20s May 13 23:29:51.440: INFO: node status heartbeat is unchanged for 3.000640593s, waiting for 1m20s May 13 23:29:52.441: INFO: node status heartbeat is unchanged for 4.001914255s, waiting for 1m20s May 13 23:29:53.439: INFO: node status heartbeat is unchanged for 5.000349307s, waiting for 1m20s May 13 23:29:54.439: INFO: node status heartbeat is unchanged for 6.000478394s, waiting for 1m20s May 13 23:29:55.439: INFO: node status heartbeat is unchanged for 7.000333983s, waiting for 1m20s May 13 23:29:56.439: INFO: node status heartbeat is unchanged for 7.99996122s, waiting for 1m20s May 13 23:29:57.439: INFO: node status heartbeat is unchanged for 9.000156665s, waiting for 1m20s May 13 23:29:58.438: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:29:58.442: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:47 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:57 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:29:59.441: INFO: node status heartbeat is unchanged for 1.002849893s, waiting for 1m20s May 13 23:30:00.439: INFO: node status heartbeat is unchanged for 2.00080287s, waiting for 1m20s May 13 23:30:01.439: INFO: node status heartbeat is unchanged for 3.001274894s, waiting for 1m20s May 13 23:30:02.440: INFO: node status heartbeat is unchanged for 4.002025746s, waiting for 1m20s May 13 23:30:03.439: INFO: node status heartbeat is unchanged for 5.001159862s, waiting for 1m20s May 13 23:30:04.438: INFO: node status heartbeat is unchanged for 6.0005572s, waiting for 1m20s May 13 23:30:05.439: INFO: node status heartbeat is unchanged for 7.001072465s, waiting for 1m20s May 13 23:30:06.438: INFO: node status heartbeat is unchanged for 8.000241988s, waiting for 1m20s May 13 23:30:07.438: INFO: node status heartbeat is unchanged for 9.000597503s, waiting for 1m20s May 13 23:30:08.439: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:30:08.444: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:29:57 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:07 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:30:09.440: INFO: node status heartbeat is unchanged for 1.000601077s, waiting for 1m20s May 13 23:30:10.441: INFO: node status heartbeat is unchanged for 2.001729275s, waiting for 1m20s May 13 23:30:11.439: INFO: node status heartbeat is unchanged for 2.999421469s, waiting for 1m20s May 13 23:30:12.441: INFO: node status heartbeat is unchanged for 4.001651956s, waiting for 1m20s May 13 23:30:13.439: INFO: node status heartbeat is unchanged for 4.999700363s, waiting for 1m20s May 13 23:30:14.440: INFO: node status heartbeat is unchanged for 6.000584749s, waiting for 1m20s May 13 23:30:15.441: INFO: node status heartbeat is unchanged for 7.002026409s, waiting for 1m20s May 13 23:30:16.438: INFO: node status heartbeat is unchanged for 7.999086286s, waiting for 1m20s May 13 23:30:17.441: INFO: node status heartbeat is unchanged for 9.001576221s, waiting for 1m20s May 13 23:30:18.442: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:30:18.446: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:07 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:17 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields } May 13 23:30:19.441: INFO: node status heartbeat is unchanged for 999.121028ms, waiting for 1m20s May 13 23:30:20.442: INFO: node status heartbeat is unchanged for 2.000026515s, waiting for 1m20s May 13 23:30:21.439: INFO: node status heartbeat is unchanged for 2.9969232s, waiting for 1m20s May 13 23:30:22.441: INFO: node status heartbeat is unchanged for 3.999626235s, waiting for 1m20s May 13 23:30:23.439: INFO: node status heartbeat is unchanged for 4.997581289s, waiting for 1m20s May 13 23:30:24.438: INFO: node status heartbeat is unchanged for 5.996574145s, waiting for 1m20s May 13 23:30:25.441: INFO: node status heartbeat is unchanged for 6.999577526s, waiting for 1m20s May 13 23:30:26.438: INFO: node status heartbeat is unchanged for 7.996885804s, waiting for 1m20s May 13 23:30:27.439: INFO: node status heartbeat is unchanged for 8.997131383s, waiting for 1m20s May 13 23:30:28.438: INFO: node status heartbeat changed in 10s (with other status changes), waiting for 40s May 13 23:30:28.442: INFO: v1.NodeStatus{ Capacity: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "80", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "440625980Ki", Format: "BinarySI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Allocatable: {s"cmk.intel.com/exclusive-cores": {i: {...}, s: "3", Format: "DecimalSI"}, s"cpu": {i: {...}, s: "77", Format: "DecimalSI"}, s"ephemeral-storage": {i: {...}, s: "406080902496", Format: "DecimalSI"}, s"hugepages-1Gi": {s: "0", Format: "DecimalSI"}, ...}, Phase: "", Conditions: []v1.NodeCondition{ {Type: "NetworkUnavailable", Status: "False", LastHeartbeatTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 20:03:19 +0000 UTC"}, ...}, { Type: "MemoryPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientMemory", Message: "kubelet has sufficient memory available", }, { Type: "DiskPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasNoDiskPressure", Message: "kubelet has no disk pressure", }, { Type: "PIDPressure", Status: "False", - LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:17 +0000 UTC"}, + LastHeartbeatTime: v1.Time{Time: s"2022-05-13 23:30:27 +0000 UTC"}, LastTransitionTime: {Time: s"2022-05-13 19:59:24 +0000 UTC"}, Reason: "KubeletHasSufficientPID", Message: "kubelet has sufficient PID available", }, {Type: "Ready", Status: "True", LastTransitionTime: {Time: s"2022-05-13 20:00:35 +0000 UTC"}, Reason: "KubeletReady", ...}, }, Addresses: {{Type: "InternalIP", Address: "10.10.190.208"}, {Type: "Hostname", Address: "node2"}}, DaemonEndpoints: {KubeletEndpoint: {Port: 10250}}, ... 
// 5 identical fields }
May 13 23:30:29.442: INFO: node status heartbeat is unchanged for 1.004122897s, waiting for 1m20s
May 13 23:30:30.440: INFO: node status heartbeat is unchanged for 2.002683191s, waiting for 1m20s
May 13 23:30:31.439: INFO: node status heartbeat is unchanged for 3.001395146s, waiting for 1m20s
May 13 23:30:32.440: INFO: node status heartbeat is unchanged for 4.002828244s, waiting for 1m20s
May 13 23:30:33.439: INFO: node status heartbeat is unchanged for 5.001163568s, waiting for 1m20s
May 13 23:30:33.442: INFO: node status heartbeat is unchanged for 5.004860093s, waiting for 1m20s
STEP: verify node is still in ready status even though node status report is infrequent
[AfterEach] [sig-node] NodeLease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:30:33.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-6675" for this suite.
• [SLOW TEST:300.058 seconds]
[sig-node] NodeLease
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when the NodeLease feature is enabled
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49
    the kubelet should report node status infrequently
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112
------------------------------
{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":-1,"completed":8,"skipped":828,"failed":0}
May 13 23:30:33.464: INFO: Running AfterSuite actions on all nodes
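The NodeLease spec above watches two independent heartbeat channels: the kubelet renews its Lease object in the kube-node-lease namespace every few seconds, while the LastHeartbeatTime on the NodeStatus conditions only advances when a status report actually lands (every 10s in this run, and only "with other status changes"). Below is a minimal client-go sketch, not part of the suite, of how both channels can be inspected; the kubeconfig path and node name are taken from this log, everything else is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as seen in the log above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "node2" // the node the heartbeat diffs above belong to

	// Channel 1: the Lease the kubelet renews frequently (default every ~10s).
	lease, err := client.CoordinationV1().Leases("kube-node-lease").
		Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("lease last renewed at %v\n", lease.Spec.RenewTime)

	// Channel 2: status heartbeats, which move only when a report is sent.
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s last heartbeat %v\n", c.Type, c.Status, c.LastHeartbeatTime)
	}
}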
"pod-back-off-image" STEP: get restart delay after image update May 13 23:31:45.112: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-05-13 23:31:30 +0000 UTC restartedAt=2022-05-13 23:31:43 +0000 UTC (13s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 13 23:31:45.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4788" for this suite. • [SLOW TEST:404.932 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:681 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":-1,"completed":4,"skipped":432,"failed":0} May 13 23:31:45.123: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 13 23:24:38.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods W0513 23:24:38.313491 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 13 23:24:38.313: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 13 23:24:38.315: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722 May 13 23:24:38.332: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:40.335: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:42.335: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:44.336: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:46.335: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) May 13 23:24:48.336: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped May 13 23:36:21.746: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-05-13 23:31:15 +0000 UTC restartedAt=2022-05-13 23:36:20 +0000 UTC (5m5s) May 13 23:41:36.104: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-05-13 23:36:25 +0000 UTC restartedAt=2022-05-13 23:41:34 +0000 UTC (5m9s) May 13 23:46:44.474: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-05-13 23:41:39 +0000 UTC restartedAt=2022-05-13 23:46:43 +0000 UTC (5m4s) STEP: getting restart delay after a capped delay May 13 23:51:52.829: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-05-13 23:46:48 +0000 UTC restartedAt=2022-05-13 
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 13 23:24:38.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
W0513 23:24:38.313491      27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 13 23:24:38.313: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 13 23:24:38.315: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
May 13 23:24:38.332: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 13 23:24:40.335: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 13 23:24:42.335: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 13 23:24:44.336: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 13 23:24:46.335: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true)
May 13 23:24:48.336: INFO: The status of Pod back-off-cap is Running (Ready = true)
STEP: getting restart delay when capped
May 13 23:36:21.746: INFO: getRestartDelay: restartCount = 7, finishedAt=2022-05-13 23:31:15 +0000 UTC restartedAt=2022-05-13 23:36:20 +0000 UTC (5m5s)
May 13 23:41:36.104: INFO: getRestartDelay: restartCount = 8, finishedAt=2022-05-13 23:36:25 +0000 UTC restartedAt=2022-05-13 23:41:34 +0000 UTC (5m9s)
May 13 23:46:44.474: INFO: getRestartDelay: restartCount = 9, finishedAt=2022-05-13 23:41:39 +0000 UTC restartedAt=2022-05-13 23:46:43 +0000 UTC (5m4s)
STEP: getting restart delay after a capped delay
May 13 23:51:52.829: INFO: getRestartDelay: restartCount = 10, finishedAt=2022-05-13 23:46:48 +0000 UTC restartedAt=2022-05-13 23:51:51 +0000 UTC (5m3s)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 13 23:51:52.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8648" for this suite.
• [SLOW TEST:1634.552 seconds]
[sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:722
------------------------------
{"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":-1,"completed":1,"skipped":94,"failed":0}
May 13 23:51:52.847: INFO: Running AfterSuite actions on all nodes
{"msg":"PASSED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":-1,"completed":10,"skipped":537,"failed":0}
May 13 23:27:42.663: INFO: Running AfterSuite actions on all nodes
May 13 23:51:52.898: INFO: Running AfterSuite actions on node 1
May 13 23:51:52.899: INFO: Skipping dumping logs from cluster

Ran 53 of 5773 Specs in 1635.152 seconds
SUCCESS! -- 53 Passed | 0 Failed | 0 Pending | 5720 Skipped

Ginkgo ran 1 suite in 27m16.761460224s
Test Suite Passed
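A closing note on the capped delays in the last spec (5m5s, 5m9s, 5m4s, 5m3s): they plateau at MaxContainerBackOff, the kubelet's 5-minute ceiling. The suite's getRestartDelay lines are simply the gap between the previous termination's FinishedAt and the next run's StartedAt. A hypothetical helper (not the suite's own code) that computes the same number from a ContainerStatus; the timestamps in main are copied from the final measurement above:

package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restartDelay mirrors what the getRestartDelay log lines report: how long
// the kubelet waited between the previous exit and the current start.
func restartDelay(cs v1.ContainerStatus) (time.Duration, bool) {
	term := cs.LastTerminationState.Terminated
	run := cs.State.Running
	if term == nil || run == nil {
		return 0, false // need both a previous exit and a current run
	}
	return run.StartedAt.Sub(term.FinishedAt.Time), true
}

func main() {
	// Times copied from the restartCount = 10 entry in the log above.
	finished := time.Date(2022, 5, 13, 23, 46, 48, 0, time.UTC)
	started := time.Date(2022, 5, 13, 23, 51, 51, 0, time.UTC)
	cs := v1.ContainerStatus{
		LastTerminationState: v1.ContainerState{
			Terminated: &v1.ContainerStateTerminated{FinishedAt: metav1.NewTime(finished)},
		},
		State: v1.ContainerState{
			Running: &v1.ContainerStateRunning{StartedAt: metav1.NewTime(started)},
		},
	}
	if d, ok := restartDelay(cs); ok {
		fmt.Println("restart delay:", d) // prints 5m3s — at the 5m cap
	}
}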