I0321 23:21:14.351567 7 e2e.go:129] Starting e2e run "4baf5795-322c-4329-af5f-90739012863f" on Ginkgo node 1
{"msg":"Test Suite starting","total":58,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616368872 - Will randomize all specs
Will run 58 of 5737 specs

Mar 21 23:21:14.416: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:21:14.418: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 21 23:21:14.475: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 21 23:21:14.600: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 21 23:21:14.600: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 21 23:21:14.600: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 21 23:21:14.885: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 21 23:21:14.885: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 21 23:21:14.885: INFO: e2e test version: v1.21.0-beta.1
Mar 21 23:21:14.886: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 21 23:21:14.886: INFO: >>> kubeConfig: /root/.kube/config
Mar 21 23:21:15.061: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:21:15.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
Mar 21 23:21:16.058: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Mar 21 23:21:16.102: INFO: Waiting up to 5m0s for pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf" in namespace "security-context-3727" to be "Succeeded or Failed"
Mar 21 23:21:16.202: INFO: Pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf": Phase="Pending", Reason="", readiness=false. Elapsed: 100.453069ms
Mar 21 23:21:18.569: INFO: Pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467468754s
Mar 21 23:21:20.791: INFO: Pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.689691433s
Mar 21 23:21:23.329: INFO: Pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf": Phase="Running", Reason="", readiness=true. Elapsed: 7.227573352s
Mar 21 23:21:25.592: INFO: Pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.490248832s
STEP: Saw pod success
Mar 21 23:21:25.592: INFO: Pod "security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf" satisfied condition "Succeeded or Failed"
Mar 21 23:21:25.595: INFO: Trying to get logs from node latest-worker pod security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf container test-container:
STEP: delete the pod
Mar 21 23:21:26.640: INFO: Waiting for pod security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf to disappear
Mar 21 23:21:26.688: INFO: Pod security-context-8fe4d649-dffd-4ae4-bf93-7c03468759cf no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:21:26.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3727" for this suite.
• [SLOW TEST:11.773 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the container [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":58,"completed":1,"skipped":82,"failed":0}
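For reference (not part of the log): the pod this spec builds is, roughly, a one-shot container that opts out of seccomp confinement at the container level and inspects /proc/self/status. A minimal sketch with the v0.21 Go client API; the name, image, and command here are assumptions, and the 1.21-era test actually set the alpha seccomp annotation named in the STEP line above rather than the GA field:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// seccompUnconfinedContainerPod sketches a pod whose single container runs
// with an unconfined seccomp profile (container-level, not pod-level).
func seccompUnconfinedContainerPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-seccomp-unconfined"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
				// /proc/self/status reports "Seccomp: 0" when no filter is applied.
				Command: []string{"sh", "-c", "grep Seccomp /proc/self/status"},
				SecurityContext: &v1.SecurityContext{
					SeccompProfile: &v1.SeccompProfile{Type: v1.SeccompProfileTypeUnconfined},
				},
			}},
		},
	}
}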
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:21:26.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
Mar 21 23:21:27.198: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9" in namespace "security-context-test-4351" to be "Succeeded or Failed"
Mar 21 23:21:27.271: INFO: Pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9": Phase="Pending", Reason="", readiness=false. Elapsed: 71.961443ms
Mar 21 23:21:29.661: INFO: Pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.462478443s
Mar 21 23:21:32.047: INFO: Pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.848650247s
Mar 21 23:21:34.593: INFO: Pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.3941126s
Mar 21 23:21:36.795: INFO: Pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.596514134s
Mar 21 23:21:36.795: INFO: Pod "alpine-nnp-true-37b4b4e3-29c3-416e-9281-a6e602c882e9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:21:36.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4351" for this suite.
• [SLOW TEST:10.183 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":58,"completed":2,"skipped":160,"failed":0}
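The alpine-nnp-true pod above follows the usual e2e pattern: run as a non-root UID with allowPrivilegeEscalation explicitly true, so the no_new_privs flag must not be applied to the container. A sketch under those assumptions (UID and image are illustrative, not the test's exact values):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nnpTruePod() *v1.Pod {
	uid := int64(1000)
	escalate := true
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-true"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "alpine-nnp-true",
				Image: "k8s.gcr.io/e2e-test-images/nonewprivs:1.3", // assumed image
				SecurityContext: &v1.SecurityContext{
					RunAsUser: &uid,
					// Explicit true: no_new_privs must NOT be set on the container.
					AllowPrivilegeEscalation: &escalate,
				},
			}},
		},
	}
}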
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] SSH should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:21:37.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Mar 21 23:21:37.351: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:21:37.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-4020" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.530 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:21:37.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 in namespace kubelet-6692
Mar 21 23:21:38.171: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:21:38.184: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
I0321 23:21:38.298130 7 runners.go:190] Created replication controller with name: cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1, namespace: kubelet-6692, replica count: 20
Mar 21 23:21:38.487: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:21:43.939: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:21:44.026: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:21:44.530: INFO: Missing info/stats for container "runtime" on node "latest-worker"
I0321 23:21:48.349014 7 runners.go:190] cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 Pods: 20 out of 20 created, 1 running, 19 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 21 23:21:49.846: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:21:49.865: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:21:49.879: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:21:55.664: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:21:56.771: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:21:56.793: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
I0321 23:21:58.349756 7 runners.go:190] cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 Pods: 20 out of 20 created, 5 running, 15 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 21 23:22:01.945: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:02.481: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:03.082: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:07.130: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
I0321 23:22:08.350396 7 runners.go:190] cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 Pods: 20 out of 20 created, 13 running, 7 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 21 23:22:08.624: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:08.658: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:12.818: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:14.131: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:14.325: INFO: Missing info/stats for container "runtime" on node "latest-worker"
I0321 23:22:18.350596 7 runners.go:190] cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 21 23:22:18.574: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:19.351: INFO: Checking pods on node latest-worker2 via /runningpods endpoint
Mar 21 23:22:19.351: INFO: Checking pods on node latest-worker via /runningpods endpoint
Mar 21 23:22:20.261: INFO: [Resource usage on node "latest-control-plane" is not ready yet, Resource usage on node "latest-worker" is not ready yet, Resource usage on node "latest-worker2" is not ready yet]
Mar 21 23:22:20.261: INFO: 
STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 in namespace kubelet-6692, will wait for the garbage collector to delete the pods
Mar 21 23:22:20.390: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:20.393: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:23.071: INFO: Deleting ReplicationController cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 took: 2.0690259s
Mar 21 23:22:24.173: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:24.572: INFO: Terminating ReplicationController cleanup20-55a6bd0c-da11-47df-a776-20912d13d1c1 pods took: 1.500622565s
Mar 21 23:22:25.948: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:26.299: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:29.426: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:31.287: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:31.889: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:34.718: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:36.750: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:37.605: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:40.394: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:42.064: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:42.694: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:45.718: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:47.330: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:48.135: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:51.208: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:52.614: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:53.518: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:22:56.392: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:22:58.294: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:22:59.294: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:01.826: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:03.666: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:04.890: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:07.010: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:08.768: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:10.081: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:12.179: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:14.116: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:15.185: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:17.689: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:19.175: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:20.648: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:22.760: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:24.245: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:25.821: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:27.949: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:30.047: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:30.973: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:33.186: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:35.256: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:36.060: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:38.480: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:40.319: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:41.150: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:43.599: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 21 23:23:45.381: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 21 23:23:46.260: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 21 23:23:47.473: INFO: Checking pods on node latest-worker2 via /runningpods endpoint
Mar 21 23:23:47.473: INFO: Checking pods on node latest-worker via /runningpods endpoint
Mar 21 23:23:47.515: INFO: Deleting 20 pods on 2 nodes completed in 1.042517056s after the RC was deleted
Mar 21 23:23:47.515: INFO: CPU usage of containers on node "latest-control-plane"
:container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"           0.000  0.535  0.753  0.918  1.010  1.319  1.319
"runtime"     0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"     0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "latest-worker"
:container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"           0.000  0.175  0.399  0.491  0.497  0.506  0.506
"runtime"     0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"     0.000  0.000  0.000  0.000  0.000  0.000  0.000

CPU usage of containers on node "latest-worker2"
:container    5th%   20th%  50th%  70th%  90th%  95th%  99th%
"/"           0.000  0.312  0.390  0.486  0.571  0.579  0.579
"runtime"     0.000  0.000  0.000  0.000  0.000  0.000  0.000
"kubelet"     0.000  0.000  0.000  0.000  0.000  0.000  0.000

[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node latest-worker
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node latest-worker2
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:23:47.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-6692" for this suite.
• [SLOW TEST:130.255 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":58,"completed":3,"skipped":312,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:23:47.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
Mar 21 23:23:48.155: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-d19b9303-b73a-4f79-a5bf-8350a5d2c2a2" in namespace "security-context-test-8044" to be "Succeeded or Failed"
Mar 21 23:23:48.160: INFO: Pod "busybox-readonly-true-d19b9303-b73a-4f79-a5bf-8350a5d2c2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.639522ms
Mar 21 23:23:50.181: INFO: Pod "busybox-readonly-true-d19b9303-b73a-4f79-a5bf-8350a5d2c2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026468889s
Mar 21 23:23:52.343: INFO: Pod "busybox-readonly-true-d19b9303-b73a-4f79-a5bf-8350a5d2c2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188009494s
Mar 21 23:23:54.516: INFO: Pod "busybox-readonly-true-d19b9303-b73a-4f79-a5bf-8350a5d2c2a2": Phase="Failed", Reason="", readiness=false. Elapsed: 6.361295239s
Mar 21 23:23:54.516: INFO: Pod "busybox-readonly-true-d19b9303-b73a-4f79-a5bf-8350a5d2c2a2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:23:54.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8044" for this suite.
• [SLOW TEST:6.999 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a pod with readOnlyRootFilesystem
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":58,"completed":4,"skipped":359,"failed":0}
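Note the expected outcome here: with readOnlyRootFilesystem=true the container's write must fail, so the pod terminates in Phase="Failed", and that still "satisfies" the wait above because the framework's wait condition is literally named "Succeeded or Failed"; the assertion that the pod actually failed happens separately. A sketch of roughly such a pod (command illustrative):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func readonlyRootfsPod() *v1.Pod {
	readOnly := true
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "busybox-readonly-true",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
				// Writing to the root filesystem must fail, so the pod ends up Failed.
				Command: []string{"sh", "-c", "echo hello > /file"},
				SecurityContext: &v1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}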
SSSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:23:54.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
Mar 21 23:23:55.819: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a" in namespace "security-context-test-6673" to be "Succeeded or Failed"
Mar 21 23:23:55.880: INFO: Pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a": Phase="Pending", Reason="", readiness=false. Elapsed: 60.796423ms
Mar 21 23:23:57.939: INFO: Pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120042429s
Mar 21 23:23:59.990: INFO: Pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170565924s
Mar 21 23:24:02.270: INFO: Pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a": Phase="Running", Reason="", readiness=true. Elapsed: 6.450821812s
Mar 21 23:24:04.558: INFO: Pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.739099751s
Mar 21 23:24:04.559: INFO: Pod "alpine-nnp-nil-a30e9316-ac80-4f98-a6da-4964253e048a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:24:04.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6673" for this suite.
• [SLOW TEST:10.734 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":58,"completed":5,"skipped":363,"failed":0}
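alpine-nnp-nil differs from the nnp-true sketch earlier only in leaving allowPrivilegeEscalation unset: for a non-root UID the field defaults to allowing escalation, which is what this spec verifies. A sketch under the same assumptions:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nnpNilPod() *v1.Pod {
	uid := int64(1000)
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-nil"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "alpine-nnp-nil",
				Image: "k8s.gcr.io/e2e-test-images/nonewprivs:1.3", // assumed image
				SecurityContext: &v1.SecurityContext{
					RunAsUser: &uid,
					// AllowPrivilegeEscalation deliberately left nil: with uid != 0
					// the default permits escalation, which the spec asserts.
				},
			}},
		},
	}
}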
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:24:05.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Mar 21 23:24:07.122: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
Mar 21 23:24:07.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-6505 create -f -'
Mar 21 23:24:19.352: INFO: stderr: ""
Mar 21 23:24:19.352: INFO: stdout: "pod/liveness-exec created\n"
Mar 21 23:24:19.352: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-6505 create -f -'
Mar 21 23:24:19.694: INFO: stderr: ""
Mar 21 23:24:19.694: INFO: stdout: "pod/liveness-http created\n"
STEP: Check restarts
Mar 21 23:24:26.559: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:28.259: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:29.463: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:30.381: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:31.493: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:32.467: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:33.570: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:34.552: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:35.600: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:36.726: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:37.847: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:39.195: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:40.175: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:41.613: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:42.421: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:44.065: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:44.554: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:46.115: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:46.558: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:48.559: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:48.615: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:50.862: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:50.863: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:53.613: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:53.613: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:55.660: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:55.661: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:57.908: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:24:57.908: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:59.954: INFO: Pod: liveness-http, restart count:0
Mar 21 23:24:59.954: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:02.074: INFO: Pod: liveness-http, restart count:0
Mar 21 23:25:02.074: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:04.081: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:04.081: INFO: Pod: liveness-http, restart count:0
Mar 21 23:25:07.797: INFO: Pod: liveness-http, restart count:0
Mar 21 23:25:07.798: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:09.825: INFO: Pod: liveness-http, restart count:1
Mar 21 23:25:09.825: INFO: Saw liveness-http restart, succeeded...
Mar 21 23:25:09.825: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:11.898: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:14.883: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:16.923: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:18.927: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:21.033: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:23.098: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:25.104: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:27.128: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:29.207: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:31.257: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:33.287: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:35.352: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:37.435: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:39.440: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:41.482: INFO: Pod: liveness-exec, restart count:0
Mar 21 23:25:43.513: INFO: Pod: liveness-exec, restart count:1
Mar 21 23:25:43.513: INFO: Saw liveness-exec restart, succeeded...
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:25:43.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-6505" for this suite.
• [SLOW TEST:98.262 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Liveness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66
    liveness pods should be automatically restarted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":58,"completed":6,"skipped":441,"failed":0}
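The two pods here are the upstream liveness examples, piped into `kubectl create -f -` as the log shows. The exec variant, sketched with the v0.21 Go API (in client-go releases from 1.23 onward the embedded Handler field is renamed ProbeHandler); the HTTP variant is analogous with an httpGet action:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessExecPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/busybox",
				// Healthy for 30s, then the probe file disappears and the
				// kubelet restarts the container, bumping the restart count.
				Args: []string{"/bin/sh", "-c",
					"touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"},
				LivenessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}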
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:25:43.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Mar 21 23:25:44.139: INFO: Waiting up to 5m0s for pod "security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929" in namespace "security-context-947" to be "Succeeded or Failed"
Mar 21 23:25:44.199: INFO: Pod "security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929": Phase="Pending", Reason="", readiness=false. Elapsed: 59.346845ms
Mar 21 23:25:46.214: INFO: Pod "security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07456617s
Mar 21 23:25:48.251: INFO: Pod "security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111303481s
Mar 21 23:25:50.518: INFO: Pod "security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.378327883s
STEP: Saw pod success
Mar 21 23:25:50.518: INFO: Pod "security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929" satisfied condition "Succeeded or Failed"
Mar 21 23:25:50.523: INFO: Trying to get logs from node latest-worker2 pod security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929 container test-container:
STEP: delete the pod
Mar 21 23:25:50.659: INFO: Waiting for pod security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929 to disappear
Mar 21 23:25:50.727: INFO: Pod security-context-1a1b1631-a1cf-43f2-b6b5-f609be0e3929 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 21 23:25:50.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-947" for this suite.
• [SLOW TEST:7.056 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should support seccomp unconfined on the pod [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169
------------------------------
{"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":58,"completed":7,"skipped":640,"failed":0}
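Same idea as the container-level seccomp spec earlier, except that unconfined is set on the pod's securityContext, so every container inherits it unless it overrides the profile itself. Sketch (same assumptions as before):

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func seccompUnconfinedPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-seccomp-pod"},
		Spec: v1.PodSpec{
			// Pod-level profile, inherited by all containers in the pod.
			SecurityContext: &v1.PodSecurityContext{
				SeccompProfile: &v1.SeccompProfile{Type: v1.SeccompProfileTypeUnconfined},
			},
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "test-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29",
				Command: []string{"sh", "-c", "grep Seccomp /proc/self/status"},
			}},
		},
	}
}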
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 21 23:25:50.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53
[It] should be ready immediately after startupProbe succeeds
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376
Mar 21 23:26:15.223: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:17.228: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:19.233: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:21.242: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:23.243: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:25.236: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:27.256: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:29.297: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:31.232: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:33.239: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:35.278: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:37.254: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:39.249: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:41.237: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:43.292: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = false)
Mar 21 23:26:45.282: INFO: The status of Pod startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 is Running (Ready = true)
Mar 21 23:26:45.349: INFO: Container started at 2021-03-21 23:26:15.218860522 +0000 UTC m=+302.262406065, pod became ready at 2021-03-21 23:26:45.282871083 +0000 UTC m=+332.326416466, 30.064010401s after startupProbe succeeded
Mar 21 23:26:45.350: FAIL: Pod became ready in 30.064010401s, more than 5s after startupProbe succeeded. It means that the delay readiness probes were not initiated immediately after startup finished.

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00213e300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc00213e300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc00213e300, 0x6d60740)
	/usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1239 +0x2b3
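What the failing spec wires up, roughly: a startup probe that only succeeds once /tmp/startup exists, plus a trivially-true readiness probe, with the assertion that Ready is reported within about 5s of the startup probe's first success. In this run readiness lagged by ~30s (note the pod's Ready condition flipping only at 23:26:44 in the dump below), hence the FAIL. A sketch of the probe arrangement; the commands and timings are assumptions, not the test's exact values:

package e2esketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func startupProbePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "busybox",
				Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
				// The file appears only after a delay, so early startup probe
				// attempts fail (see the "Unhealthy" event below).
				Command: []string{"sh", "-c", "sleep 20; touch /tmp/startup; sleep 600"},
				StartupProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"cat", "/tmp/startup"}},
					},
					PeriodSeconds:    10,
					FailureThreshold: 60,
				},
				// Once startup succeeds, readiness should be probed (and pass)
				// almost immediately, regardless of its long period.
				ReadinessProbe: &v1.Probe{
					Handler: v1.Handler{
						Exec: &v1.ExecAction{Command: []string{"true"}},
					},
					PeriodSeconds: 30,
				},
			}},
		},
	}
}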
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-2125".
STEP: Found 5 events.
Mar 21 23:26:45.423: INFO: At 2021-03-21 23:25:51 +0000 UTC - event for startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1: {default-scheduler } Scheduled: Successfully assigned container-probe-2125/startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 to latest-worker2
Mar 21 23:26:45.423: INFO: At 2021-03-21 23:25:53 +0000 UTC - event for startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29" already present on machine
Mar 21 23:26:45.423: INFO: At 2021-03-21 23:25:54 +0000 UTC - event for startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1: {kubelet latest-worker2} Created: Created container busybox
Mar 21 23:26:45.423: INFO: At 2021-03-21 23:25:54 +0000 UTC - event for startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1: {kubelet latest-worker2} Started: Started container busybox
Mar 21 23:26:45.423: INFO: At 2021-03-21 23:26:04 +0000 UTC - event for startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1: {kubelet latest-worker2} Unhealthy: Startup probe failed: cat: can't open '/tmp/startup': No such file or directory
Mar 21 23:26:45.526: INFO: POD                                           NODE            PHASE    GRACE  CONDITIONS
Mar 21 23:26:45.526: INFO: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1  latest-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:25:51 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:26:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:26:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-21 23:25:51 +0000 UTC }]
Mar 21 23:26:45.526: INFO: 
Mar 21 23:26:45.578: INFO: Logging node info for node latest-control-plane
Mar 21 23:26:45.651: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane 490b9532-4cb6-4803-8805-500c50bef538 6921571 0 2021-02-19 10:11:38 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-02-19 10:11:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-02-19 10:11:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-02-19 10:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:11:38 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:24:28 +0000 UTC,LastTransitionTime:2021-02-19 10:12:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.14,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2277e49732264d9b915753a27b5b08cc,SystemUUID:3fcd47a6-9190-448f-a26a-9823c0424f23,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 21 23:26:45.652: INFO: Logging kubelet events for node latest-control-plane
Mar 21 23:26:45.723: INFO: Logging pods the kubelet thinks is on node latest-control-plane
Mar 21 23:26:45.829: INFO: etcd-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container etcd ready: true, restart count 0
Mar 21 23:26:45.829: INFO: kube-proxy-6jdsd started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 21 23:26:45.829: INFO: kube-controller-manager-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container kube-controller-manager ready: true, restart count 0
Mar 21 23:26:45.829: INFO: kube-scheduler-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container kube-scheduler ready: true, restart count 0
Mar 21 23:26:45.829: INFO: kube-apiserver-latest-control-plane started at 2021-02-19 10:11:46 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container kube-apiserver ready: true, restart count 0
Mar 21 23:26:45.829: INFO: kindnet-94zqp started at 2021-02-19 10:11:54 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 21 23:26:45.829: INFO: coredns-74ff55c5b-lv4vw started at 2021-03-21 23:24:39 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container coredns ready: true, restart count 0
Mar 21 23:26:45.829: INFO: local-path-provisioner-8b46957d4-54gls started at 2021-02-19 10:12:21 +0000 UTC (0+1 container statuses recorded)
Mar 21 23:26:45.829: INFO: 	Container local-path-provisioner ready: true, restart count 0
W0321 23:26:45.917675 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 21 23:26:46.062: INFO: Latency metrics for node latest-control-plane
Mar 21 23:26:46.062: INFO: Logging node info for node latest-worker
Mar 21 23:26:46.107: INFO: Node Info: &Node{ObjectMeta:{latest-worker 52cd6d4b-d53f-435d-801a-04c2822dec44 6923175 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-11":"csi-mock-csi-mock-volumes-11","csi-mock-csi-mock-volumes-1170":"csi-mock-csi-mock-volumes-1170","csi-mock-csi-mock-volumes-1204":"csi-mock-csi-mock-volumes-1204","csi-mock-csi-mock-volumes-1243":"csi-mock-csi-mock-volumes-1243","csi-mock-csi-mock-volumes-135":"csi-mock-csi-mock-volumes-135","csi-mock-csi-mock-volumes-1517":"csi-mock-csi-mock-volumes-1517","csi-mock-csi-mock-volumes-1561":"csi-mock-csi-mock-volumes-1561","csi-mock-csi-mock-volumes-1595":"csi-mock-csi-mock-volumes-1595","csi-mock-csi-mock-volumes-1714":"csi-mock-csi-mock-volumes-1714","csi-mock-csi-mock-volumes-1732":"csi-mock-csi-mock-volumes-1732","csi-mock-csi-mock-volumes-1876":"csi-mock-csi-mock-volumes-1876","csi-mock-csi-mock-volumes-1945":"csi-mock-csi-mock-volumes-1945","csi-mock-csi-mock-volumes-2041":"csi-mock-csi-mock-volumes-2041","csi-mock-csi-mock-volumes-2057":"csi-mock-csi-mock-volumes-2057","csi-mock-csi-mock-volumes-2208":"csi-mock-csi-mock-volumes-2208","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2459":"csi-mock-csi-mock-volumes-2459","csi-mock-csi-mock-volumes-2511":"csi-mock-csi-mock-volumes-2511","csi-mock-csi-mock-volumes-2706":"csi-mock-csi-mock-volumes-2706","csi-mock-csi-mock-volumes-2710":"csi-mock-csi-mock-volumes-2710","csi-mock-csi-mock-volumes-273":"csi-mock-csi-mock-volumes-273","csi-mock-csi-mock-volumes-2782":"csi-mock-csi-mock-volumes-2782","csi-mock-csi-mock-volumes-2797":"csi-mock-csi-mock-volumes-2797","csi-mock-csi-mock-volumes-2805":"csi-mock-csi-mock-volumes-2805","csi-mock-csi-mock-volumes-2843":"csi-mock-csi-mock-volumes-2843","csi-mock-csi-mock-volumes-3075":"csi-mock-csi-mock-volumes-3075","csi-mock-csi-mock-volumes-3107":"csi-mock-csi-mock-volumes-3107","csi-mock-csi-mock-volumes-3115":"csi-mock-csi-mock-volumes-3115","csi-mock-csi-mock-volumes-3164":"csi-mock-csi-mock-volumes-3164","csi-mock-csi-mock-volumes-3202":"csi-mock-csi-mock-volumes-3202","csi-mock-csi-mock-volumes-3235":"csi-mock-csi-mock-volumes-3235","csi-mock-csi-mock-volumes-3298":"
csi-mock-csi-mock-volumes-3298","csi-mock-csi-mock-volumes-3313":"csi-mock-csi-mock-volumes-3313","csi-mock-csi-mock-volumes-3364":"csi-mock-csi-mock-volumes-3364","csi-mock-csi-mock-volumes-3488":"csi-mock-csi-mock-volumes-3488","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3546":"csi-mock-csi-mock-volumes-3546","csi-mock-csi-mock-volumes-3568":"csi-mock-csi-mock-volumes-3568","csi-mock-csi-mock-volumes-3595":"csi-mock-csi-mock-volumes-3595","csi-mock-csi-mock-volumes-3615":"csi-mock-csi-mock-volumes-3615","csi-mock-csi-mock-volumes-3622":"csi-mock-csi-mock-volumes-3622","csi-mock-csi-mock-volumes-3660":"csi-mock-csi-mock-volumes-3660","csi-mock-csi-mock-volumes-3738":"csi-mock-csi-mock-volumes-3738","csi-mock-csi-mock-volumes-380":"csi-mock-csi-mock-volumes-380","csi-mock-csi-mock-volumes-3905":"csi-mock-csi-mock-volumes-3905","csi-mock-csi-mock-volumes-3983":"csi-mock-csi-mock-volumes-3983","csi-mock-csi-mock-volumes-4658":"csi-mock-csi-mock-volumes-4658","csi-mock-csi-mock-volumes-4689":"csi-mock-csi-mock-volumes-4689","csi-mock-csi-mock-volumes-4839":"csi-mock-csi-mock-volumes-4839","csi-mock-csi-mock-volumes-4871":"csi-mock-csi-mock-volumes-4871","csi-mock-csi-mock-volumes-4885":"csi-mock-csi-mock-volumes-4885","csi-mock-csi-mock-volumes-4888":"csi-mock-csi-mock-volumes-4888","csi-mock-csi-mock-volumes-5028":"csi-mock-csi-mock-volumes-5028","csi-mock-csi-mock-volumes-5118":"csi-mock-csi-mock-volumes-5118","csi-mock-csi-mock-volumes-5120":"csi-mock-csi-mock-volumes-5120","csi-mock-csi-mock-volumes-5160":"csi-mock-csi-mock-volumes-5160","csi-mock-csi-mock-volumes-5164":"csi-mock-csi-mock-volumes-5164","csi-mock-csi-mock-volumes-5225":"csi-mock-csi-mock-volumes-5225","csi-mock-csi-mock-volumes-526":"csi-mock-csi-mock-volumes-526","csi-mock-csi-mock-volumes-5365":"csi-mock-csi-mock-volumes-5365","csi-mock-csi-mock-volumes-5399":"csi-mock-csi-mock-volumes-5399","csi-mock-csi-mock-volumes-5443":"csi-mock-csi-mock-volumes-5443","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5561":"csi-mock-csi-mock-volumes-5561","csi-mock-csi-mock-volumes-5608":"csi-mock-csi-mock-volumes-5608","csi-mock-csi-mock-volumes-5652":"csi-mock-csi-mock-volumes-5652","csi-mock-csi-mock-volumes-5672":"csi-mock-csi-mock-volumes-5672","csi-mock-csi-mock-volumes-569":"csi-mock-csi-mock-volumes-569","csi-mock-csi-mock-volumes-5759":"csi-mock-csi-mock-volumes-5759","csi-mock-csi-mock-volumes-5910":"csi-mock-csi-mock-volumes-5910","csi-mock-csi-mock-volumes-6046":"csi-mock-csi-mock-volumes-6046","csi-mock-csi-mock-volumes-6099":"csi-mock-csi-mock-volumes-6099","csi-mock-csi-mock-volumes-621":"csi-mock-csi-mock-volumes-621","csi-mock-csi-mock-volumes-6347":"csi-mock-csi-mock-volumes-6347","csi-mock-csi-mock-volumes-6447":"csi-mock-csi-mock-volumes-6447","csi-mock-csi-mock-volumes-6752":"csi-mock-csi-mock-volumes-6752","csi-mock-csi-mock-volumes-6763":"csi-mock-csi-mock-volumes-6763","csi-mock-csi-mock-volumes-7184":"csi-mock-csi-mock-volumes-7184","csi-mock-csi-mock-volumes-7244":"csi-mock-csi-mock-volumes-7244","csi-mock-csi-mock-volumes-7259":"csi-mock-csi-mock-volumes-7259","csi-mock-csi-mock-volumes-726":"csi-mock-csi-mock-volumes-726","csi-mock-csi-mock-volumes-7302":"csi-mock-csi-mock-volumes-7302","csi-mock-csi-mock-volumes-7346":"csi-mock-csi-mock-volumes-7346","csi-mock-csi-mock-volumes-7378":"csi-mock-csi-mock-volumes-7378","csi-mock-csi-mock-volumes-7385":"csi-mock-csi-mock-volumes-7385","csi-mock-csi-mock-volumes-746":"cs
i-mock-csi-mock-volumes-746","csi-mock-csi-mock-volumes-7574":"csi-mock-csi-mock-volumes-7574","csi-mock-csi-mock-volumes-7712":"csi-mock-csi-mock-volumes-7712","csi-mock-csi-mock-volumes-7749":"csi-mock-csi-mock-volumes-7749","csi-mock-csi-mock-volumes-7820":"csi-mock-csi-mock-volumes-7820","csi-mock-csi-mock-volumes-7946":"csi-mock-csi-mock-volumes-7946","csi-mock-csi-mock-volumes-8159":"csi-mock-csi-mock-volumes-8159","csi-mock-csi-mock-volumes-8382":"csi-mock-csi-mock-volumes-8382","csi-mock-csi-mock-volumes-8458":"csi-mock-csi-mock-volumes-8458","csi-mock-csi-mock-volumes-8569":"csi-mock-csi-mock-volumes-8569","csi-mock-csi-mock-volumes-8626":"csi-mock-csi-mock-volumes-8626","csi-mock-csi-mock-volumes-8627":"csi-mock-csi-mock-volumes-8627","csi-mock-csi-mock-volumes-8774":"csi-mock-csi-mock-volumes-8774","csi-mock-csi-mock-volumes-8777":"csi-mock-csi-mock-volumes-8777","csi-mock-csi-mock-volumes-8880":"csi-mock-csi-mock-volumes-8880","csi-mock-csi-mock-volumes-923":"csi-mock-csi-mock-volumes-923","csi-mock-csi-mock-volumes-9279":"csi-mock-csi-mock-volumes-9279","csi-mock-csi-mock-volumes-9372":"csi-mock-csi-mock-volumes-9372","csi-mock-csi-mock-volumes-9687":"csi-mock-csi-mock-volumes-9687","csi-mock-csi-mock-volumes-969":"csi-mock-csi-mock-volumes-969","csi-mock-csi-mock-volumes-983":"csi-mock-csi-mock-volumes-983","csi-mock-csi-mock-volumes-9858":"csi-mock-csi-mock-volumes-9858","csi-mock-csi-mock-volumes-9983":"csi-mock-csi-mock-volumes-9983","csi-mock-csi-mock-volumes-9992":"csi-mock-csi-mock-volumes-9992","csi-mock-csi-mock-volumes-9995":"csi-mock-csi-mock-volumes-9995"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 19:07:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.9,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9501a3ffa7bf40adaca8f4cb3d1dcc93,SystemUUID:6921bb21-67c9-42ea-b514-81b1daac5968,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 
docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[docker.io/bitnami/kubectl@sha256:471a2f06834db208d0e4bef5856550f911357c41b72d0baefa591b3714839067 docker.io/bitnami/kubectl:latest],SizeBytes:48898314,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 
k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:latest docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:26:46.109: INFO: Logging kubelet events for node latest-worker Mar 21 23:26:46.142: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 21 23:26:46.252: INFO: failure-3 started at 2021-03-21 23:25:15 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container failure-3 ready: true, restart count 1 Mar 21 23:26:46.252: INFO: chaos-daemon-qkndt started at 2021-03-21 18:05:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:26:46.252: INFO: externalip-test-2g8rm started at 2021-03-21 23:24:44 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container externalip-test ready: true, restart count 0 Mar 21 23:26:46.252: INFO: kindnet-sbskd started at 2021-03-21 18:05:46 +0000 UTC (0+1 container statuses recorded) Mar 21 
23:26:46.252: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:26:46.252: INFO: ss2-1 started at 2021-03-21 23:25:46 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container webserver ready: true, restart count 0 Mar 21 23:26:46.252: INFO: inclusterclient started at 2021-03-21 23:22:57 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container inclusterclient ready: true, restart count 0 Mar 21 23:26:46.252: INFO: ss2-0 started at 2021-03-21 23:26:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container webserver ready: false, restart count 0 Mar 21 23:26:46.252: INFO: externalip-test-ls4kf started at 2021-03-21 23:24:44 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container externalip-test ready: true, restart count 0 Mar 21 23:26:46.252: INFO: kube-proxy-5wvjm started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:26:46.252: INFO: execpod7sbxc started at 2021-03-21 23:24:50 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container agnhost-container ready: true, restart count 0 Mar 21 23:26:46.252: INFO: success started at 2021-03-21 23:23:53 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container success ready: false, restart count 0 Mar 21 23:26:46.252: INFO: failure-1 started at 2021-03-21 23:23:59 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.252: INFO: Container failure-1 ready: false, restart count 0 W0321 23:26:46.264514 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:26:46.642: INFO: Latency metrics for node latest-worker Mar 21 23:26:46.642: INFO: Logging node info for node latest-worker2 Mar 21 23:26:46.651: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 7d2a1377-0c6f-45fb-899e-6c307ecb1803 6920639 0 2021-02-19 10:12:05 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1005":"csi-mock-csi-mock-volumes-1005","csi-mock-csi-mock-volumes-1125":"csi-mock-csi-mock-volumes-1125","csi-mock-csi-mock-volumes-1158":"csi-mock-csi-mock-volumes-1158","csi-mock-csi-mock-volumes-1167":"csi-mock-csi-mock-volumes-1167","csi-mock-csi-mock-volumes-117":"csi-mock-csi-mock-volumes-117","csi-mock-csi-mock-volumes-123":"csi-mock-csi-mock-volumes-123","csi-mock-csi-mock-volumes-1248":"csi-mock-csi-mock-volumes-1248","csi-mock-csi-mock-volumes-132":"csi-mock-csi-mock-volumes-132","csi-mock-csi-mock-volumes-1462":"csi-mock-csi-mock-volumes-1462","csi-mock-csi-mock-volumes-1508":"csi-mock-csi-mock-volumes-1508","csi-mock-csi-mock-volumes-1620":"csi-mock-csi-mock-volumes-1620","csi-mock-csi-mock-volumes-1694":"csi-mock-csi-mock-volumes-1694","csi-mock-csi-mock-volumes-1823":"csi-mock-csi-mock-volumes-1823","csi-mock-csi-mock-volumes-1829":"csi-mock-csi-mock-volumes-1829","csi-mock-csi-mock-volumes-1852":"csi-mock-csi-mock-volumes-1852","csi-mock-csi-mock-volumes-1856":"csi-mock-csi-mock-volumes-1856","csi-mock-csi-mock-volumes-194":"csi-mock-csi-mock-volumes-194","csi-mock-csi-mock-volumes-1943":"csi-mock-csi-mock-volumes-1943","csi-mock-csi-mock-volumes-2103":"csi-mock-csi-mock-volumes-2103","csi-mock-csi-mock-volumes-2109":"csi-mock-csi-mock-volumes-2109","csi-mock-csi-mock-volumes-2134":"csi-mock-csi-mock-volumes-2134","csi-mock-csi-mock-volumes-2212":"csi-mock-csi-mock-volumes-2212","csi-mock-csi-mock-volumes-2302":"csi-mock-csi-mock-volumes-2302","csi-mock-csi-mock-volumes-2308":"csi-mock-csi-mock-volumes-2308","csi-mock-csi-mock-volumes-2407":"csi-mock-csi-mock-volumes-2407","csi-mock-csi-mock-volumes-2458":"csi-mock-csi-mock-volumes-2458","csi-mock-csi-mock-volumes-2474":"csi-mock-csi-mock-volumes-2474","csi-mock-csi-mock-volumes-254":"csi-mock-csi-mock-volumes-254","csi-mock-csi-mock-volumes-2575":"csi-mock-csi-mock-volumes-2575","csi-mock-csi-mock-volumes-2622":"csi-mock-csi-mock-volumes-2622","csi-mock-csi-mock-volumes-2651":"csi-mock-csi-mock-volumes-2651","csi-mock-csi-mock-volumes-2668":"csi-mock-csi-mock-volumes-2668","csi-mock-csi-mock-volumes-2781":"csi-mock-csi-mock-volumes-2781","csi-mock-csi-mock-volumes-2791":"csi-mock-csi-mock-volumes-2791","csi-mock-csi-mock-volumes-2823":"csi-mock-csi-mock-volumes-2823","csi-mock-csi-mock-volumes-2847":"csi-mock-csi-mock-volumes-2847","csi-mock-csi-mock-volumes-295":"csi-mock-csi-mock-volumes-295","csi-mock-csi-mock-volumes-3129":"csi-mock-csi-mock-volumes-3129","csi-mock-csi-mock-volumes-3190":"csi-mock-csi-mock-volumes-3190","csi-mock-csi-mock-volumes-3428":"csi-mock-csi-mock-volumes-3428","csi-mock-csi-mock-volumes-3490":"csi-mock-csi-mock-volumes-3490","csi-mock-csi-mock-volumes-3491":"csi-mock-csi-mock-volumes-3491","csi-mock-csi-mock-volumes-3532":"csi-mock-csi-mock-volumes-3532","csi-mock-csi-mock-volumes-3609":"csi-mock-csi-mock-volumes-3609","csi-mock-csi-mock-volumes-368":"csi-mock-csi-mock-volumes-368","csi-mock-csi-mock-volumes-3698":"csi-mock-csi-mock-volumes-3698","csi-mock-csi-mock-volumes-3714":"csi-mock-csi-mock-volumes-3714","csi-mock-csi-mock-volumes-3720":"csi-mock-csi-mock-volumes-3720","csi-mock-csi-mock-volumes-3722":"csi-mock-csi-mock-volumes-3722","csi-mock-csi-mock-volumes-3723":"csi-mock-csi-mock-volumes-3723","csi-mock-csi-mock-volumes-3783":"csi-mock-csi-mock-volumes-3783","csi-mock-csi-mock-volumes-3887":"csi-mock-csi-mock-volumes-3887","csi-mock-csi-mock-volumes-3973":"csi-mock-csi-mock-volumes-3973","csi-mock-csi-mock-volumes-4074":"
csi-mock-csi-mock-volumes-4074","csi-mock-csi-mock-volumes-4164":"csi-mock-csi-mock-volumes-4164","csi-mock-csi-mock-volumes-4181":"csi-mock-csi-mock-volumes-4181","csi-mock-csi-mock-volumes-4442":"csi-mock-csi-mock-volumes-4442","csi-mock-csi-mock-volumes-4483":"csi-mock-csi-mock-volumes-4483","csi-mock-csi-mock-volumes-4549":"csi-mock-csi-mock-volumes-4549","csi-mock-csi-mock-volumes-4707":"csi-mock-csi-mock-volumes-4707","csi-mock-csi-mock-volumes-4906":"csi-mock-csi-mock-volumes-4906","csi-mock-csi-mock-volumes-4977":"csi-mock-csi-mock-volumes-4977","csi-mock-csi-mock-volumes-5116":"csi-mock-csi-mock-volumes-5116","csi-mock-csi-mock-volumes-5166":"csi-mock-csi-mock-volumes-5166","csi-mock-csi-mock-volumes-5233":"csi-mock-csi-mock-volumes-5233","csi-mock-csi-mock-volumes-5344":"csi-mock-csi-mock-volumes-5344","csi-mock-csi-mock-volumes-5406":"csi-mock-csi-mock-volumes-5406","csi-mock-csi-mock-volumes-5433":"csi-mock-csi-mock-volumes-5433","csi-mock-csi-mock-volumes-5483":"csi-mock-csi-mock-volumes-5483","csi-mock-csi-mock-volumes-5486":"csi-mock-csi-mock-volumes-5486","csi-mock-csi-mock-volumes-5520":"csi-mock-csi-mock-volumes-5520","csi-mock-csi-mock-volumes-5540":"csi-mock-csi-mock-volumes-5540","csi-mock-csi-mock-volumes-5592":"csi-mock-csi-mock-volumes-5592","csi-mock-csi-mock-volumes-5741":"csi-mock-csi-mock-volumes-5741","csi-mock-csi-mock-volumes-5753":"csi-mock-csi-mock-volumes-5753","csi-mock-csi-mock-volumes-5790":"csi-mock-csi-mock-volumes-5790","csi-mock-csi-mock-volumes-5820":"csi-mock-csi-mock-volumes-5820","csi-mock-csi-mock-volumes-5830":"csi-mock-csi-mock-volumes-5830","csi-mock-csi-mock-volumes-5880":"csi-mock-csi-mock-volumes-5880","csi-mock-csi-mock-volumes-5886":"csi-mock-csi-mock-volumes-5886","csi-mock-csi-mock-volumes-5899":"csi-mock-csi-mock-volumes-5899","csi-mock-csi-mock-volumes-5925":"csi-mock-csi-mock-volumes-5925","csi-mock-csi-mock-volumes-5928":"csi-mock-csi-mock-volumes-5928","csi-mock-csi-mock-volumes-6098":"csi-mock-csi-mock-volumes-6098","csi-mock-csi-mock-volumes-6154":"csi-mock-csi-mock-volumes-6154","csi-mock-csi-mock-volumes-6193":"csi-mock-csi-mock-volumes-6193","csi-mock-csi-mock-volumes-6237":"csi-mock-csi-mock-volumes-6237","csi-mock-csi-mock-volumes-6393":"csi-mock-csi-mock-volumes-6393","csi-mock-csi-mock-volumes-6394":"csi-mock-csi-mock-volumes-6394","csi-mock-csi-mock-volumes-6468":"csi-mock-csi-mock-volumes-6468","csi-mock-csi-mock-volumes-6508":"csi-mock-csi-mock-volumes-6508","csi-mock-csi-mock-volumes-6516":"csi-mock-csi-mock-volumes-6516","csi-mock-csi-mock-volumes-6520":"csi-mock-csi-mock-volumes-6520","csi-mock-csi-mock-volumes-6574":"csi-mock-csi-mock-volumes-6574","csi-mock-csi-mock-volumes-6663":"csi-mock-csi-mock-volumes-6663","csi-mock-csi-mock-volumes-6715":"csi-mock-csi-mock-volumes-6715","csi-mock-csi-mock-volumes-6754":"csi-mock-csi-mock-volumes-6754","csi-mock-csi-mock-volumes-6804":"csi-mock-csi-mock-volumes-6804","csi-mock-csi-mock-volumes-6918":"csi-mock-csi-mock-volumes-6918","csi-mock-csi-mock-volumes-6925":"csi-mock-csi-mock-volumes-6925","csi-mock-csi-mock-volumes-7092":"csi-mock-csi-mock-volumes-7092","csi-mock-csi-mock-volumes-7139":"csi-mock-csi-mock-volumes-7139","csi-mock-csi-mock-volumes-7270":"csi-mock-csi-mock-volumes-7270","csi-mock-csi-mock-volumes-7273":"csi-mock-csi-mock-volumes-7273","csi-mock-csi-mock-volumes-7442":"csi-mock-csi-mock-volumes-7442","csi-mock-csi-mock-volumes-7448":"csi-mock-csi-mock-volumes-7448","csi-mock-csi-mock-volumes-7543":"csi-mock-csi-mock-volumes-7543","csi-mock-csi-mock-volume
s-7597":"csi-mock-csi-mock-volumes-7597","csi-mock-csi-mock-volumes-7608":"csi-mock-csi-mock-volumes-7608","csi-mock-csi-mock-volumes-7642":"csi-mock-csi-mock-volumes-7642","csi-mock-csi-mock-volumes-7659":"csi-mock-csi-mock-volumes-7659","csi-mock-csi-mock-volumes-7725":"csi-mock-csi-mock-volumes-7725","csi-mock-csi-mock-volumes-7760":"csi-mock-csi-mock-volumes-7760","csi-mock-csi-mock-volumes-778":"csi-mock-csi-mock-volumes-778","csi-mock-csi-mock-volumes-7811":"csi-mock-csi-mock-volumes-7811","csi-mock-csi-mock-volumes-7819":"csi-mock-csi-mock-volumes-7819","csi-mock-csi-mock-volumes-791":"csi-mock-csi-mock-volumes-791","csi-mock-csi-mock-volumes-7929":"csi-mock-csi-mock-volumes-7929","csi-mock-csi-mock-volumes-7930":"csi-mock-csi-mock-volumes-7930","csi-mock-csi-mock-volumes-7933":"csi-mock-csi-mock-volumes-7933","csi-mock-csi-mock-volumes-7950":"csi-mock-csi-mock-volumes-7950","csi-mock-csi-mock-volumes-8005":"csi-mock-csi-mock-volumes-8005","csi-mock-csi-mock-volumes-8070":"csi-mock-csi-mock-volumes-8070","csi-mock-csi-mock-volumes-8123":"csi-mock-csi-mock-volumes-8123","csi-mock-csi-mock-volumes-8132":"csi-mock-csi-mock-volumes-8132","csi-mock-csi-mock-volumes-8134":"csi-mock-csi-mock-volumes-8134","csi-mock-csi-mock-volumes-8381":"csi-mock-csi-mock-volumes-8381","csi-mock-csi-mock-volumes-8391":"csi-mock-csi-mock-volumes-8391","csi-mock-csi-mock-volumes-8409":"csi-mock-csi-mock-volumes-8409","csi-mock-csi-mock-volumes-8420":"csi-mock-csi-mock-volumes-8420","csi-mock-csi-mock-volumes-8467":"csi-mock-csi-mock-volumes-8467","csi-mock-csi-mock-volumes-8581":"csi-mock-csi-mock-volumes-8581","csi-mock-csi-mock-volumes-8619":"csi-mock-csi-mock-volumes-8619","csi-mock-csi-mock-volumes-8708":"csi-mock-csi-mock-volumes-8708","csi-mock-csi-mock-volumes-8766":"csi-mock-csi-mock-volumes-8766","csi-mock-csi-mock-volumes-8789":"csi-mock-csi-mock-volumes-8789","csi-mock-csi-mock-volumes-8800":"csi-mock-csi-mock-volumes-8800","csi-mock-csi-mock-volumes-8819":"csi-mock-csi-mock-volumes-8819","csi-mock-csi-mock-volumes-8830":"csi-mock-csi-mock-volumes-8830","csi-mock-csi-mock-volumes-9034":"csi-mock-csi-mock-volumes-9034","csi-mock-csi-mock-volumes-9051":"csi-mock-csi-mock-volumes-9051","csi-mock-csi-mock-volumes-9052":"csi-mock-csi-mock-volumes-9052","csi-mock-csi-mock-volumes-9211":"csi-mock-csi-mock-volumes-9211","csi-mock-csi-mock-volumes-9234":"csi-mock-csi-mock-volumes-9234","csi-mock-csi-mock-volumes-9238":"csi-mock-csi-mock-volumes-9238","csi-mock-csi-mock-volumes-9316":"csi-mock-csi-mock-volumes-9316","csi-mock-csi-mock-volumes-9344":"csi-mock-csi-mock-volumes-9344","csi-mock-csi-mock-volumes-9386":"csi-mock-csi-mock-volumes-9386","csi-mock-csi-mock-volumes-941":"csi-mock-csi-mock-volumes-941","csi-mock-csi-mock-volumes-9416":"csi-mock-csi-mock-volumes-9416","csi-mock-csi-mock-volumes-9426":"csi-mock-csi-mock-volumes-9426","csi-mock-csi-mock-volumes-9429":"csi-mock-csi-mock-volumes-9429","csi-mock-csi-mock-volumes-9438":"csi-mock-csi-mock-volumes-9438","csi-mock-csi-mock-volumes-9439":"csi-mock-csi-mock-volumes-9439","csi-mock-csi-mock-volumes-9665":"csi-mock-csi-mock-volumes-9665","csi-mock-csi-mock-volumes-9772":"csi-mock-csi-mock-volumes-9772","csi-mock-csi-mock-volumes-9793":"csi-mock-csi-mock-volumes-9793","csi-mock-csi-mock-volumes-9921":"csi-mock-csi-mock-volumes-9921","csi-mock-csi-mock-volumes-9953":"csi-mock-csi-mock-volumes-9953"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-02-19 10:12:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-21 18:46:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-21 23:23:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:05 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-21 23:23:08 +0000 UTC,LastTransitionTime:2021-02-19 10:12:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.13,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c99819c175ab4bf18f91789643e4cec7,SystemUUID:67d3f1bb-f61c-4599-98ac-5879bee4ddde,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:7460a5e29c91e16bd36534b5284b27832e1c6e0f7c22a4bb79eda79942d250e1 docker.io/ollivier/clearwater-cassandra:hunter],SizeBytes:386500834,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:ef3803b4336ea217f9a467dac7f5c4bc3d2cd0f3cbbcc6c101419506fc3a6fa4 docker.io/ollivier/clearwater-homestead-prov:hunter],SizeBytes:360721934,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:abcaf4a9ec74aa746f3b3949f05ee359349fce2b0b3b2deedf67164719cff5dc docker.io/ollivier/clearwater-ellis:hunter],SizeBytes:351519591,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:f8738d756137990120f6aafca7e0bc57186ab591a4b51508ba9ddf30688f4de1 docker.io/ollivier/clearwater-homer:hunter],SizeBytes:344304298,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:5bc730435563dc3e31f3a3b7bb58b5899c453929b624052622b3a9a1c2fd04d8 docker.io/ollivier/clearwater-astaire:hunter],SizeBytes:327310970,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:0da41ddbf72c158f8e617015450587634cdf6856696005601b2108df92a27254 docker.io/ollivier/clearwater-bono:hunter],SizeBytes:303708624,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:fd91f90bdfbfac30288ca296962e36537f8fa311d34711904e027098227d9f49 docker.io/ollivier/clearwater-sprout:hunter],SizeBytes:298627136,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:8467d6c9caefa6a402eba927ee31f8179b2dc3386ac4768dd04f4bd630a1b9e9 docker.io/ollivier/clearwater-homestead:hunter],SizeBytes:295167572,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:7054a741dbd4ea45fa4efd8138c07e21d35fbed627971559de5df2d03883e94f docker.io/ollivier/clearwater-ralf:hunter],SizeBytes:287441316,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:1d5f9b5385db632e3076463fec5a4db5d4a15ef21ac5a112cb42fb0551ffa36d docker.io/ollivier/clearwater-chronos:hunter],SizeBytes:285504787,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:1ec1c46f531b5466e62e2a3f1e65e97336e8ea0ca50f11a4305775d6aeac9a58 docker.io/ollivier/clearwater-live-test:hunter],SizeBytes:39060692,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[docker.io/pingcap/chaos-daemon@sha256:2138de599ca94104101fcd8f3bd04401f8c88f2bc721d0f77d55a81b85f4d92f docker.io/pingcap/chaos-daemon:v0.8.0],SizeBytes:21885344,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 
k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/pingcap/chaos-mesh@sha256:b90c04665e0275602e52df37d4b6fdd4621df7d4e4c823fc63356559f67dea72 docker.io/pingcap/chaos-mesh:v0.8.0],SizeBytes:13863887,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e docker.io/coredns/coredns:1.8.0],SizeBytes:12945155,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:4a6e0769130686518325b21b0c1d0688b54e7c79244d48e1b15634e98e40c6ef docker.io/coredns/coredns:1.7.1],SizeBytes:12935701,},ContainerImage{Names:[docker.io/coredns/coredns@sha256:642ff9910da6ea9a8624b0234eef52af9ca75ecbec474c5507cb096bdfbae4e5 docker.io/coredns/coredns:latest],SizeBytes:12893350,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 21 23:26:46.653: INFO: Logging kubelet events for node latest-worker2 Mar 21 23:26:46.830: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 21 23:26:46.887: INFO: chaos-daemon-gfm87 started at 2021-03-21 17:24:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container chaos-daemon ready: true, restart count 0 Mar 21 23:26:46.887: INFO: chaos-controller-manager-69c479c674-hcpp6 started at 2021-03-21 18:05:18 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container chaos-mesh ready: true, restart count 0 Mar 21 23:26:46.887: INFO: startup-f482ddf2-874b-415e-8f53-c029bbc2dcd1 started at 2021-03-21 23:25:51 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container busybox ready: true, restart count 0 Mar 21 23:26:46.887: INFO: 
failure-2 started at 2021-03-21 23:24:07 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container failure-2 ready: true, restart count 1 Mar 21 23:26:46.887: INFO: kindnet-lhbxs started at 2021-03-21 17:24:47 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container kindnet-cni ready: true, restart count 0 Mar 21 23:26:46.887: INFO: coredns-74ff55c5b-kcjgk started at 2021-03-21 23:24:39 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container coredns ready: true, restart count 0 Mar 21 23:26:46.887: INFO: httpd started at 2021-03-21 23:23:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container httpd ready: true, restart count 0 Mar 21 23:26:46.887: INFO: kube-proxy-7q92q started at 2021-02-19 10:12:05 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container kube-proxy ready: true, restart count 0 Mar 21 23:26:46.887: INFO: ss2-2 started at 2021-03-21 23:24:35 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container webserver ready: true, restart count 0 Mar 21 23:26:46.887: INFO: rally-ae1e1e5d-vg2fpj0j-hjbtf started at 2021-03-21 23:26:45 +0000 UTC (0+1 container statuses recorded) Mar 21 23:26:46.887: INFO: Container rally-ae1e1e5d-vg2fpj0j ready: false, restart count 0 W0321 23:26:46.980576 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 21 23:26:47.517: INFO: Latency metrics for node latest-worker2 Mar 21 23:26:47.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2125" for this suite. • Failure [57.811 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 Mar 21 23:26:45.350: Pod became ready in 30.064010401s, more than 5s after startupProbe succeeded. It means that the delay readiness probes were not initiated immediately after startup finished. 
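------------------------------
The failure above concerns probe sequencing: once a startupProbe succeeds, the kubelet is expected to start evaluating the readiness probe immediately, so the pod should flip to Ready within a few seconds rather than the ~30s observed here. For context, a rough Go sketch of the pod shape such a case exercises follows; this is not the test's actual source, the image, commands, and timings are illustrative, and it assumes k8s.io/api of the v0.21.x vintage matching this run, where Probe still embeds Handler (later releases renamed it ProbeHandler). The source location for the failure is logged just below.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// startupGatedPod builds a pod whose readiness should be evaluated
// promptly once its startupProbe succeeds: readiness probes are held
// back until startup completes, then run on their own period.
func startupGatedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "startup-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.29", // illustrative image
				Command: []string{"sh", "-c", "touch /tmp/started && sleep 600"},
				StartupProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/started"}},
					},
					PeriodSeconds:    1,
					FailureThreshold: 60,
				},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"true"}},
					},
					PeriodSeconds: 1,
				},
			}},
		},
	}
}
------------------------------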
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":58,"completed":7,"skipped":663,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:26:48.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Mar 21 23:26:58.675: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-3662" to be "Succeeded or Failed" Mar 21 23:26:59.027: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 351.321468ms Mar 21 23:27:01.268: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.592818325s Mar 21 23:27:03.327: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.651257088s Mar 21 23:27:05.386: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.710846067s Mar 21 23:27:05.386: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:05.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3662" for this suite. 
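------------------------------
For reference on the runAsNonRoot case just completed above (pod "explicit-nonroot-uid", summarized below): when runAsNonRoot is true and runAsUser is an explicit non-zero UID, the kubelet can satisfy the non-root requirement and starts the container normally. A minimal Go sketch of that pod shape, not the test's actual source; the UID value is illustrative, and the image is taken from this node's image list only as a plausible stand-in.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// explicitNonRootUIDPod: an explicit non-zero runAsUser lets the kubelet
// prove runAsNonRoot, so the container is admitted and runs to completion.
func explicitNonRootUIDPod() *corev1.Pod {
	nonRoot := true
	uid := int64(1234) // illustrative non-zero UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-nonroot-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "explicit-nonroot-uid",
				Image: "k8s.gcr.io/e2e-test-images/nonroot:1.1", // plausible stand-in
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &nonRoot,
					RunAsUser:    &uid,
				},
			}},
		},
	}
}
------------------------------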
• [SLOW TEST:16.904 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":58,"completed":8,"skipped":676,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSS ------------------------------ [sig-node] crictl should be able to run crictl on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:05.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crictl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33 Mar 21 23:27:07.600: INFO: Only supported for providers [gce gke] (not local) [AfterEach] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:07.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crictl-2665" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [2.230 seconds] [sig-node] crictl /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be able to run crictl on the node [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40 Only supported for providers [gce gke] (not local) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:07.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:15.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6530" for this suite. 
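------------------------------
The container-runtime case above creates a container from an image that requires registry credentials and, with no imagePullSecrets attached, expects the kubelet to report an image pull failure rather than ever running it. A hedged Go sketch of both variants follows; the registry path and secret name are hypothetical, not the test's actual values. In a real cluster the passing variant would first create the secret, e.g. with kubectl create secret docker-registry.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// privateImagePod: without imagePullSecrets the container stays in an
// image-pull-error waiting state; with a registry secret attached the
// pull can succeed.
func privateImagePod(withSecret bool) *corev1.Pod {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "image-pull-test-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "image-pull-test",
				Image: "registry.example.com/private/image:latest", // hypothetical private image
			}},
		},
	}
	if withSecret {
		// Secret name is illustrative; it would reference a
		// kubernetes.io/dockerconfigjson secret in the namespace.
		pod.Spec.ImagePullSecrets = []corev1.LocalObjectReference{{Name: "regcred"}}
	}
	return pod
}
------------------------------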
• [SLOW TEST:9.608 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull from private registry without secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":58,"completed":9,"skipped":751,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:17.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run with an explicit root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:21.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1660" for this suite. 
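------------------------------
The counterpart runAsNonRoot case above sets runAsNonRoot: true together with runAsUser: 0. That combination is contradictory, so the kubelet refuses to start the container: it never reaches Running and surfaces a CreateContainerConfigError-style state instead. A sketch under the same assumptions as the earlier ones (illustrative image):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// explicitRootUIDPod: runAsUser=0 violates runAsNonRoot=true, so the
// kubelet rejects the container at start time.
func explicitRootUIDPod() *corev1.Pod {
	nonRoot := true
	rootUID := int64(0)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "explicit-root-uid"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "explicit-root-uid",
				Image: "k8s.gcr.io/e2e-test-images/nonroot:1.1", // plausible stand-in
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot: &nonRoot,
					RunAsUser:    &rootUID,
				},
			}},
		},
	}
}
------------------------------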
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":58,"completed":10,"skipped":881,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:21.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 21 23:27:21.795: INFO: Waiting up to 5m0s for pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5" in namespace "security-context-8345" to be "Succeeded or Failed" Mar 21 23:27:21.843: INFO: Pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5": Phase="Pending", Reason="", readiness=false. Elapsed: 47.684624ms Mar 21 23:27:23.858: INFO: Pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063112845s Mar 21 23:27:26.028: INFO: Pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.232194252s Mar 21 23:27:28.205: INFO: Pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5": Phase="Running", Reason="", readiness=true. Elapsed: 6.409427938s Mar 21 23:27:30.337: INFO: Pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.541759105s STEP: Saw pod success Mar 21 23:27:30.337: INFO: Pod "security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5" satisfied condition "Succeeded or Failed" Mar 21 23:27:30.392: INFO: Trying to get logs from node latest-worker2 pod security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5 container test-container: STEP: delete the pod Mar 21 23:27:32.369: INFO: Waiting for pod security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5 to disappear Mar 21 23:27:32.446: INFO: Pod security-context-a2f8d171-257b-4e38-adf9-03381b4dfda5 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:32.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-8345" for this suite. 
• [SLOW TEST:11.718 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":58,"completed":11,"skipped":897,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:33.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Mar 21 23:27:35.447: INFO: Waiting up to 5m0s for pod "security-context-64c8a332-1b6d-45cd-9aad-641264ed6762" in namespace "security-context-7970" to be "Succeeded or Failed" Mar 21 23:27:35.818: INFO: Pod "security-context-64c8a332-1b6d-45cd-9aad-641264ed6762": Phase="Pending", Reason="", readiness=false. Elapsed: 371.667896ms Mar 21 23:27:38.323: INFO: Pod "security-context-64c8a332-1b6d-45cd-9aad-641264ed6762": Phase="Pending", Reason="", readiness=false. Elapsed: 2.876293177s Mar 21 23:27:40.735: INFO: Pod "security-context-64c8a332-1b6d-45cd-9aad-641264ed6762": Phase="Pending", Reason="", readiness=false. Elapsed: 5.288423144s Mar 21 23:27:42.791: INFO: Pod "security-context-64c8a332-1b6d-45cd-9aad-641264ed6762": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.344370417s STEP: Saw pod success Mar 21 23:27:42.791: INFO: Pod "security-context-64c8a332-1b6d-45cd-9aad-641264ed6762" satisfied condition "Succeeded or Failed" Mar 21 23:27:43.167: INFO: Trying to get logs from node latest-worker pod security-context-64c8a332-1b6d-45cd-9aad-641264ed6762 container test-container: STEP: delete the pod Mar 21 23:27:44.448: INFO: Waiting for pod security-context-64c8a332-1b6d-45cd-9aad-641264ed6762 to disappear Mar 21 23:27:44.645: INFO: Pod security-context-64c8a332-1b6d-45cd-9aad-641264ed6762 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:44.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7970" for this suite. 
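------------------------------
The SupplementalGroups case above asserts that GIDs listed in pod.Spec.SecurityContext.SupplementalGroups show up among the container process's groups (for example in `id -G` output). An illustrative Go sketch, with made-up GID values:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// supplementalGroupsPod: the listed GIDs are added to the container
// process's supplementary group set.
func supplementalGroupsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "security-context-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				SupplementalGroups: []int64{1234, 5678}, // illustrative GIDs
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29", // illustrative
				Command: []string{"sh", "-c", "id -G"},
			}},
		},
	}
}
------------------------------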
• [SLOW TEST:12.285 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":58,"completed":12,"skipped":923,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:45.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:47.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-3493" for this suite. 
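The NodeLease spec above checks the kubelet's heartbeat object: a coordination.k8s.io Lease named after the node in the kube-node-lease namespace, whose renewTime must advance at least once within the lease duration. The object has roughly this shape (values shown are the defaults of this era and are illustrative):

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: latest-worker2               # one Lease per node, named after the node
  namespace: kube-node-lease
spec:
  holderIdentity: latest-worker2
  leaseDurationSeconds: 40           # kubelet default
  renewTime: "2021-03-21T23:27:46.000000Z"   # bumped on each renewal, roughly every 10s by default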
•{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":58,"completed":13,"skipped":937,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:47.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Mar 21 23:27:49.106: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563" in namespace "security-context-test-2281" to be "Succeeded or Failed" Mar 21 23:27:49.227: INFO: Pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563": Phase="Pending", Reason="", readiness=false. Elapsed: 121.209562ms Mar 21 23:27:51.233: INFO: Pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126951716s Mar 21 23:27:53.291: INFO: Pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185306614s Mar 21 23:27:55.443: INFO: Pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.336952209s Mar 21 23:27:55.443: INFO: Pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563" satisfied condition "Succeeded or Failed" Mar 21 23:27:55.492: INFO: Got logs for pod "busybox-privileged-true-2b129c89-886e-404f-b569-a0dab586a563": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:27:55.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2281" for this suite. 
• [SLOW TEST:7.864 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":58,"completed":14,"skipped":1080,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSS ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:27:55.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 21 23:27:56.056: INFO: Waiting up to 5m0s for pod "security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517" in namespace "security-context-1225" to be "Succeeded or Failed" Mar 21 23:27:56.059: INFO: Pod "security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517": Phase="Pending", Reason="", readiness=false. Elapsed: 3.632265ms Mar 21 23:27:58.125: INFO: Pod "security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069118946s Mar 21 23:28:00.407: INFO: Pod "security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351496867s Mar 21 23:28:02.425: INFO: Pod "security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.369087113s STEP: Saw pod success Mar 21 23:28:02.425: INFO: Pod "security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517" satisfied condition "Succeeded or Failed" Mar 21 23:28:02.801: INFO: Trying to get logs from node latest-worker2 pod security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517 container test-container: STEP: delete the pod Mar 21 23:28:03.677: INFO: Waiting for pod security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517 to disappear Mar 21 23:28:03.806: INFO: Pod security-context-af6cee2b-d0e9-4d9d-a2aa-bf84eda01517 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:28:03.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-1225" for this suite. 
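The RunAsUser spec above sets the UID at the pod level, which every container inherits unless it overrides it in its own securityContext. A minimal sketch (UID and names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: run-as-user-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # pod-level UID, inherited by all containers
  containers:
  - name: test-container
    image: busybox
    command: ["id", "-u"]    # expect "1001"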
• [SLOW TEST:8.259 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":58,"completed":15,"skipped":1088,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:03.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should update ConfigMap successfully /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140 STEP: Creating ConfigMap configmap-7755/configmap-test-5635037d-0dd6-4515-82cd-3c00e9484e32 STEP: Updating configMap configmap-7755/configmap-test-5635037d-0dd6-4515-82cd-3c00e9484e32 STEP: Verifying update of ConfigMap configmap-7755/configmap-test-5635037d-0dd6-4515-82cd-3c00e9484e32 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:28:04.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7755" for this suite. 
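The ConfigMap spec above is a plain create/update/read-back round trip: ConfigMap data is a mutable string map, so an update is an in-place write of .data and a subsequent GET must observe the new values. A sketch of the object being mutated (name and keys illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-update-demo
data:
  data-1: value-1     # created with this value...
# ...then updated in place, e.g. setting data-1: value-2; the test then
# re-reads the ConfigMap and verifies the new value is what comes back.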
•{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":58,"completed":16,"skipped":1099,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Mount propagation should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:05.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Mar 21 23:28:05.675: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:07.746: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:09.748: INFO: The status of Pod master is Running (Ready = true) Mar 21 23:28:09.868: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:12.247: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:13.885: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:15.902: INFO: The status of Pod slave is Running (Ready = true) Mar 21 23:28:16.137: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:18.262: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:20.178: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:22.173: INFO: The status of Pod private is Running (Ready = true) Mar 21 23:28:22.292: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:24.506: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:28:26.374: INFO: The status of Pod default is Running (Ready = true) Mar 21 23:28:26.413: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:26.413: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:26.646: INFO: Exec stderr: "" Mar 21 23:28:26.695: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:26.695: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:26.876: INFO: Exec stderr: "" Mar 21 23:28:26.937: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} 
Mar 21 23:28:26.937: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:27.059: INFO: Exec stderr: "" Mar 21 23:28:27.093: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:27.093: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:27.212: INFO: Exec stderr: "" Mar 21 23:28:27.245: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:27.245: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:27.384: INFO: Exec stderr: "" Mar 21 23:28:27.431: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:27.431: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:27.592: INFO: Exec stderr: "" Mar 21 23:28:27.648: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:27.648: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:27.841: INFO: Exec stderr: "" Mar 21 23:28:27.857: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:27.857: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:28.071: INFO: Exec stderr: "" Mar 21 23:28:28.074: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:28.074: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:28.212: INFO: Exec stderr: "" Mar 21 23:28:28.265: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:28.265: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:28.490: INFO: Exec stderr: "" Mar 21 23:28:28.551: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:28.551: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:28.782: INFO: Exec stderr: "" Mar 21 23:28:28.839: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:28.839: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:29.003: INFO: Exec stderr: "" Mar 21 23:28:29.018: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:29.018: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:29.151: INFO: Exec 
stderr: "" Mar 21 23:28:29.168: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:29.168: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:29.337: INFO: Exec stderr: "" Mar 21 23:28:29.396: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:29.396: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:29.568: INFO: Exec stderr: "" Mar 21 23:28:29.640: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:29.640: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:29.818: INFO: Exec stderr: "" Mar 21 23:28:29.834: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:29.834: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:30.004: INFO: Exec stderr: "" Mar 21 23:28:30.031: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:30.031: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:30.184: INFO: Exec stderr: "" Mar 21 23:28:30.223: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:30.223: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:30.470: INFO: Exec stderr: "" Mar 21 23:28:30.523: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:30.523: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:30.723: INFO: Exec stderr: "" Mar 21 23:28:34.862: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-6982"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-6982"/host; echo host > "/var/lib/kubelet/mount-propagation-6982"/host/file] Namespace:mount-propagation-6982 PodName:hostexec-latest-worker2-5j26w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:28:34.862: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:35.005: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:35.005: INFO: >>> 
kubeConfig: /root/.kube/config Mar 21 23:28:35.178: INFO: pod master mount master: stdout: "master", stderr: "" error: Mar 21 23:28:35.207: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:35.207: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:35.343: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:35.362: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:35.362: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:35.525: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:35.623: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:35.623: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:35.811: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:35.847: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:35.847: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:36.135: INFO: pod master mount host: stdout: "host", stderr: "" error: Mar 21 23:28:36.215: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:36.215: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:36.521: INFO: pod slave mount master: stdout: "master", stderr: "" error: Mar 21 23:28:36.524: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:36.525: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:36.669: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Mar 21 23:28:36.680: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:36.680: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:36.955: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:37.015: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:37.015: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:37.214: 
INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:37.280: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:37.280: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:37.483: INFO: pod slave mount host: stdout: "host", stderr: "" error: Mar 21 23:28:37.488: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:37.488: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:37.666: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:37.676: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:37.676: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:37.852: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:37.872: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:37.872: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:38.028: INFO: pod private mount private: stdout: "private", stderr: "" error: Mar 21 23:28:38.034: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:38.034: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:38.183: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:38.244: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:38.244: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:38.370: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:38.459: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:38.459: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:38.621: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:38.698: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:38.698: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:38.854: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:38.987: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:38.987: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:39.239: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:39.327: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:39.327: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:39.513: INFO: pod default mount default: stdout: "default", stderr: "" error: Mar 21 23:28:39.617: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:39.617: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:39.796: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Mar 21 23:28:39.796: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-6982"/master/file` = master] Namespace:mount-propagation-6982 PodName:hostexec-latest-worker2-5j26w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:28:39.796: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:40.026: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! 
-e "/var/lib/kubelet/mount-propagation-6982"/slave/file] Namespace:mount-propagation-6982 PodName:hostexec-latest-worker2-5j26w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:28:40.027: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:40.156: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-6982"/host] Namespace:mount-propagation-6982 PodName:hostexec-latest-worker2-5j26w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:28:40.157: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:40.497: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-6982 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:40.498: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:40.638: INFO: Exec stderr: "" Mar 21 23:28:40.677: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-6982 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:40.677: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:40.847: INFO: Exec stderr: "" Mar 21 23:28:40.898: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-6982 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:40.898: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:41.196: INFO: Exec stderr: "" Mar 21 23:28:41.250: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-6982 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 21 23:28:41.250: INFO: >>> kubeConfig: /root/.kube/config Mar 21 23:28:41.401: INFO: Exec stderr: "" Mar 21 23:28:41.401: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-6982"] Namespace:mount-propagation-6982 PodName:hostexec-latest-worker2-5j26w ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 21 23:28:41.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-latest-worker2-5j26w in namespace mount-propagation-6982 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:28:41.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-6982" for this suite. 
• [SLOW TEST:36.514 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":58,"completed":17,"skipped":1290,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:41.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 21 23:28:43.944: INFO: Waiting up to 5m0s for pod "security-context-668d3429-a71c-4805-bab9-676768878784" in namespace "security-context-3273" to be "Succeeded or Failed" Mar 21 23:28:43.966: INFO: Pod "security-context-668d3429-a71c-4805-bab9-676768878784": Phase="Pending", Reason="", readiness=false. Elapsed: 21.095202ms Mar 21 23:28:46.150: INFO: Pod "security-context-668d3429-a71c-4805-bab9-676768878784": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205817958s Mar 21 23:28:48.348: INFO: Pod "security-context-668d3429-a71c-4805-bab9-676768878784": Phase="Running", Reason="", readiness=true. Elapsed: 4.403310214s Mar 21 23:28:50.389: INFO: Pod "security-context-668d3429-a71c-4805-bab9-676768878784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.44496855s STEP: Saw pod success Mar 21 23:28:50.389: INFO: Pod "security-context-668d3429-a71c-4805-bab9-676768878784" satisfied condition "Succeeded or Failed" Mar 21 23:28:50.751: INFO: Trying to get logs from node latest-worker2 pod security-context-668d3429-a71c-4805-bab9-676768878784 container test-container: STEP: delete the pod Mar 21 23:28:51.238: INFO: Waiting for pod security-context-668d3429-a71c-4805-bab9-676768878784 to disappear Mar 21 23:28:51.308: INFO: Pod security-context-668d3429-a71c-4805-bab9-676768878784 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:28:51.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-3273" for this suite. 
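The runtime/default seccomp spec above is the counterpart of the unconfined cases: the pod asks for the container runtime's default seccomp filter, so the container must report filter mode rather than unconfined. A sketch using the GA field (the fixture again goes through the alpha annotation; names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: seccomp-runtime-default-demo
spec:
  restartPolicy: Never
  securityContext:
    seccompProfile:
      type: RuntimeDefault             # the runtime's built-in default filter
  containers:
  - name: test-container
    image: busybox
    command: ["grep", "Seccomp:", "/proc/self/status"]   # expect "Seccomp: 2" (filter mode)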
• [SLOW TEST:9.633 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":58,"completed":18,"skipped":1525,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSS ------------------------------ [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242 [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:28:51.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-single-pod STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 Mar 21 23:28:51.868: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:29:51.885: INFO: Waiting for terminating namespaces to be deleted... [It] eventually evict pod with finite tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242 Mar 21 23:29:51.897: INFO: Starting informer... STEP: Starting pod... Mar 21 23:29:52.139: INFO: Pod is running on latest-worker2. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting to see if a Pod won't be deleted Mar 21 23:30:57.820: INFO: Pod wasn't evicted STEP: Waiting for Pod to be deleted Mar 21 23:31:45.396: INFO: Pod was evicted after toleration time run out. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:31:45.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-877" for this suite. 
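The taint spec above applies a NoExecute taint to the pod's node while the pod carries a matching toleration with a finite tolerationSeconds, which is why it first survives the observation window ("Pod wasn't evicted") and is then evicted once the window lapses ("evicted after toleration time run out"). A sketch of such a toleration (taint key/value taken from the log; image and duration illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: taint-eviction-demo
spec:
  containers:
  - name: pause
    image: busybox
    command: ["sleep", "3600"]
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    tolerationSeconds: 60    # pod may stay ~60s after the taint lands, then the taint manager evicts it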
• [SLOW TEST:174.269 seconds] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 eventually evict pod with finite tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242 ------------------------------ {"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","total":58,"completed":19,"skipped":1531,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:148 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:31:45.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:31:46.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-4103" for this suite. 
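The sysctl spec above never runs a pod at all: mixing a valid sysctl with invalidly named ones makes the API server reject the Pod at validation time, which is why the test finishes in about a second. A sketch of the kind of spec that gets rejected (the invalid names here are illustrative, not the fixture's exact values):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-reject-demo
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # valid, one of the "safe" sysctls
      value: "0"
    - name: foo-                     # invalid sysctl name; fails API validation
      value: bar
    - name: "kernel.msgmax,kernel.shmmax"   # also invalid: not a single sysctl name
      value: "10000"
  containers:
  - name: test-container
    image: busybox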
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":58,"completed":20,"skipped":1605,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:31:46.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 STEP: create image pull secret STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:32:13.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5572" for this suite. 
• [SLOW TEST:28.477 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull from private registry with secret [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":58,"completed":21,"skipped":1834,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:32:15.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:32:25.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6326" for this suite. 
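The invalid-registry spec above is the negative case: the image points at a registry that cannot be reached, so the pull fails, the kubelet backs off and retries, and the container must never reach a running state. Sketch (image reference is an illustrative placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: invalid-registry-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: invalid.registry.example/name:tag   # unreachable registry
# status.containerStatuses[0].state.waiting.reason is expected to cycle
# between ErrImagePull and ImagePullBackOff; the container never starts.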
• [SLOW TEST:11.211 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":58,"completed":22,"skipped":1948,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:32:26.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Mar 21 23:32:26.984: INFO: Waiting up to 5m0s for node latest-worker2 condition Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Mar 21 23:32:28.203: INFO: node status heartbeat is unchanged for 1.089927204s, waiting for 1m20s Mar 21 23:32:29.136: INFO: node status heartbeat is unchanged for 2.022664348s, waiting for 1m20s Mar 21 23:32:30.253: INFO: node status heartbeat is unchanged for 3.140062071s, waiting for 1m20s Mar 21 23:32:31.219: INFO: node status heartbeat is unchanged for 4.105818487s, waiting for 1m20s Mar 21 23:32:32.251: INFO: node status heartbeat is unchanged for 5.138056168s, waiting for 1m20s Mar 21 23:32:33.183: INFO: node status heartbeat is unchanged for 6.069579432s, waiting for 1m20s Mar 21 23:32:34.250: INFO: node status heartbeat is unchanged for 7.136341687s, waiting for 1m20s Mar 21 23:32:35.121: INFO: node status heartbeat is unchanged for 8.007687905s, waiting for 1m20s Mar 21 23:32:36.149: INFO: node status heartbeat is unchanged for 9.03549956s, waiting for 1m20s Mar 21 23:32:37.161: INFO: node status heartbeat is unchanged for 10.048256996s, waiting for 1m20s Mar 21 23:32:38.298: INFO: node status heartbeat is unchanged for 11.184567698s, waiting for 1m20s Mar 21 23:32:39.220: INFO: node status heartbeat is unchanged for 12.106770864s, waiting for 1m20s Mar 21 
23:32:40.176: INFO: node status heartbeat is unchanged for 13.06239624s, waiting for 1m20s Mar 21 23:32:41.247: INFO: node status heartbeat is unchanged for 14.133669883s, waiting for 1m20s Mar 21 23:32:42.149: INFO: node status heartbeat is unchanged for 15.035762899s, waiting for 1m20s Mar 21 23:32:43.155: INFO: node status heartbeat is unchanged for 16.041632801s, waiting for 1m20s Mar 21 23:32:44.249: INFO: node status heartbeat is unchanged for 17.135845004s, waiting for 1m20s Mar 21 23:32:45.323: INFO: node status heartbeat is unchanged for 18.209893303s, waiting for 1m20s Mar 21 23:32:46.153: INFO: node status heartbeat is unchanged for 19.039348333s, waiting for 1m20s Mar 21 23:32:48.267: INFO: node status heartbeat is unchanged for 21.154089477s, waiting for 1m20s Mar 21 23:32:49.399: INFO: node status heartbeat is unchanged for 22.285544255s, waiting for 1m20s Mar 21 23:32:50.579: INFO: node status heartbeat is unchanged for 23.465533708s, waiting for 1m20s Mar 21 23:32:51.176: INFO: node status heartbeat is unchanged for 24.063129763s, waiting for 1m20s Mar 21 23:32:52.154: INFO: node status heartbeat is unchanged for 25.041016164s, waiting for 1m20s Mar 21 23:32:53.491: INFO: node status heartbeat is unchanged for 26.377490005s, waiting for 1m20s Mar 21 23:32:54.257: INFO: node status heartbeat is unchanged for 27.143626419s, waiting for 1m20s Mar 21 23:32:55.145: INFO: node status heartbeat is unchanged for 28.031811958s, waiting for 1m20s Mar 21 23:32:56.121: INFO: node status heartbeat is unchanged for 29.008323253s, waiting for 1m20s Mar 21 23:32:57.233: INFO: node status heartbeat is unchanged for 30.120190026s, waiting for 1m20s Mar 21 23:32:58.187: INFO: node status heartbeat is unchanged for 31.073728496s, waiting for 1m20s Mar 21 23:32:59.216: INFO: node status heartbeat is unchanged for 32.102772362s, waiting for 1m20s Mar 21 23:33:00.130: INFO: node status heartbeat is unchanged for 33.016543605s, waiting for 1m20s Mar 21 23:33:01.500: INFO: node status heartbeat is unchanged for 34.386723684s, waiting for 1m20s Mar 21 23:33:02.221: INFO: node status heartbeat is unchanged for 35.107336307s, waiting for 1m20s Mar 21 23:33:03.398: INFO: node status heartbeat is unchanged for 36.284555741s, waiting for 1m20s Mar 21 23:33:04.481: INFO: node status heartbeat is unchanged for 37.367749843s, waiting for 1m20s Mar 21 23:33:05.453: INFO: node status heartbeat is unchanged for 38.339682003s, waiting for 1m20s Mar 21 23:33:06.652: INFO: node status heartbeat is unchanged for 39.538509205s, waiting for 1m20s Mar 21 23:33:07.900: INFO: node status heartbeat is unchanged for 40.786969536s, waiting for 1m20s Mar 21 23:33:08.616: INFO: node status heartbeat is unchanged for 41.502457967s, waiting for 1m20s Mar 21 23:33:09.547: INFO: node status heartbeat is unchanged for 42.433551316s, waiting for 1m20s Mar 21 23:33:10.344: INFO: node status heartbeat changed in 5m1s, was waiting for at least 40s, success! STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:10.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-9203" for this suite. 
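The second NodeLease spec above shows the division of labor between leases and NodeStatus: the Lease is renewed every few seconds as the cheap heartbeat, while a full, unchanged NodeStatus is only re-reported on a much longer period, which is why the log sees the status heartbeat unchanged for tens of seconds and only observes a change after roughly five minutes. The two knobs live in the kubelet configuration; a fragment with what I understand to be the defaults of this era (values illustrative, check your kubelet config):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: 10s   # how often status is computed (and posted if it changed)
nodeStatusReportFrequency: 5m    # how often an unchanged status is still written, matching the ~5m gap above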
• [SLOW TEST:44.490 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":58,"completed":23,"skipped":2120,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:11.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 Mar 21 23:33:12.751: INFO: Waiting up to 5m0s for pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9" in namespace "security-context-test-7163" to be "Succeeded or Failed" Mar 21 23:33:13.515: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Pending", Reason="", readiness=false. Elapsed: 764.381584ms Mar 21 23:33:15.915: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.164013844s Mar 21 23:33:18.408: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.657390288s Mar 21 23:33:20.692: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.940590927s Mar 21 23:33:23.323: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Running", Reason="", readiness=true. Elapsed: 10.572505038s Mar 21 23:33:25.436: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Running", Reason="", readiness=true. Elapsed: 12.684627877s Mar 21 23:33:27.615: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.864560601s Mar 21 23:33:27.616: INFO: Pod "busybox-user-0-73419abc-792f-476e-8a51-0a5add2fefe9" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:27.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7163" for this suite. • [SLOW TEST:17.335 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":58,"completed":24,"skipped":2181,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:28.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 Mar 21 23:33:30.180: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3156" to be "Succeeded or Failed" Mar 21 23:33:30.723: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 543.057279ms Mar 21 23:33:32.790: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.61006269s Mar 21 23:33:35.221: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 5.040841943s Mar 21 23:33:37.772: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 7.592048595s Mar 21 23:33:40.011: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.830569407s Mar 21 23:33:40.011: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:40.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3156" for this suite. 
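The runAsNonRoot spec above sets no runAsUser at all: the non-root UID comes from the image's own USER directive, and the kubelet, seeing runAsNonRoot: true, verifies the resolved UID is non-zero before starting the container (a non-numeric USER it cannot verify would instead fail with CreateContainerConfigError). Sketch (image is an illustrative stand-in for one built with a numeric non-root USER):

apiVersion: v1
kind: Pod
metadata:
  name: implicit-nonroot-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: registry.example.com/nonroot:1.0   # placeholder; image metadata sets e.g. "USER 1234"
    securityContext:
      runAsNonRoot: true   # kubelet refuses to start the container if the effective UID resolves to 0
    command: ["id", "-u"]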
• [SLOW TEST:12.006 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an image specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":58,"completed":25,"skipped":2257,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:40.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Mar 21 23:33:40.921: INFO: Waiting up to 5m0s for pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056" in namespace "downward-api-2145" to be "Succeeded or Failed" Mar 21 23:33:41.018: INFO: Pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056": Phase="Pending", Reason="", readiness=false. Elapsed: 97.125039ms Mar 21 23:33:43.597: INFO: Pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676046852s Mar 21 23:33:46.117: INFO: Pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056": Phase="Pending", Reason="", readiness=false. Elapsed: 5.196437974s Mar 21 23:33:48.709: INFO: Pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056": Phase="Pending", Reason="", readiness=false. Elapsed: 7.788370347s Mar 21 23:33:51.085: INFO: Pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.163974335s STEP: Saw pod success Mar 21 23:33:51.085: INFO: Pod "downward-api-829b9041-6710-4d29-9538-1de9444fa056" satisfied condition "Succeeded or Failed" Mar 21 23:33:51.278: INFO: Trying to get logs from node latest-worker2 pod downward-api-829b9041-6710-4d29-9538-1de9444fa056 container dapi-container: STEP: delete the pod Mar 21 23:33:53.232: INFO: Waiting for pod downward-api-829b9041-6710-4d29-9538-1de9444fa056 to disappear Mar 21 23:33:53.555: INFO: Pod downward-api-829b9041-6710-4d29-9538-1de9444fa056 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:33:53.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2145" for this suite. 
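The Downward API case above injects the node and pod addresses as environment variables via fieldRef. A sketch of the relevant pod shape, assuming host networking so that the two addresses coincide (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo           # hypothetical name
spec:
  hostNetwork: true                 # the [LinuxOnly] variant tested above
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep _IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP

With hostNetwork set, HOST_IP and POD_IP are expected to report the same address, which is the property a spec like this can assert from the container's log output.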
• [SLOW TEST:14.042 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":58,"completed":26,"skipped":2355,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods Extended Pod Container Status should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:33:54.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Mar 21 23:35:15.987: INFO: watch delete seen for pod-submit-status-2-0 Mar 21 23:35:15.987: INFO: Pod pod-submit-status-2-0 on node latest-worker2 timings total=1m20.837267872s t=1.501s run=0s execute=0s Mar 21 23:35:16.557: INFO: watch delete seen for pod-submit-status-1-0 Mar 21 23:35:16.557: INFO: Pod pod-submit-status-1-0 on node latest-worker2 timings total=1m21.407680753s t=491ms run=0s execute=0s Mar 21 23:35:22.550: INFO: watch delete seen for pod-submit-status-0-0 Mar 21 23:35:22.551: INFO: Pod pod-submit-status-0-0 on node latest-worker2 timings total=1m27.401008848s t=1.685s run=0s execute=0s Mar 21 23:36:15.088: INFO: watch delete seen for pod-submit-status-0-1 Mar 21 23:36:15.088: INFO: Pod pod-submit-status-0-1 on node latest-worker2 timings total=52.537287465s t=117ms run=0s execute=0s Mar 21 23:36:16.921: INFO: watch delete seen for pod-submit-status-1-1 Mar 21 23:36:16.921: INFO: Pod pod-submit-status-1-1 on node latest-worker2 timings total=1m0.363823528s t=1.623s run=0s execute=0s Mar 21 23:36:19.761: INFO: watch delete seen for pod-submit-status-2-1 Mar 21 23:36:19.761: INFO: Pod pod-submit-status-2-1 on node latest-worker2 timings total=1m3.774370315s t=1.663s run=0s execute=0s Mar 21 23:37:21.481: INFO: watch delete seen for pod-submit-status-1-2 Mar 21 23:37:21.481: INFO: Pod pod-submit-status-1-2 on node latest-worker timings total=1m4.559724658s t=1.935s run=0s execute=0s Mar 21 23:37:28.448: INFO: watch delete seen for pod-submit-status-2-2 Mar 21 23:37:28.449: INFO: Pod pod-submit-status-2-2 on node latest-worker2 timings total=1m8.687388353s t=1.136s run=0s execute=0s Mar 21 23:37:30.211: INFO: watch delete seen for 
pod-submit-status-0-2 Mar 21 23:37:30.212: INFO: Pod pod-submit-status-0-2 on node latest-worker2 timings total=1m15.12353096s t=1.034s run=0s execute=0s Mar 21 23:37:33.763: INFO: watch delete seen for pod-submit-status-2-3 Mar 21 23:37:33.764: INFO: Pod pod-submit-status-2-3 on node latest-worker2 timings total=5.314935162s t=1.316s run=0s execute=0s Mar 21 23:38:35.371: INFO: watch delete seen for pod-submit-status-2-4 Mar 21 23:38:35.371: INFO: Pod pod-submit-status-2-4 on node latest-worker2 timings total=1m1.607097412s t=1.228s run=0s execute=0s Mar 21 23:38:35.890: INFO: watch delete seen for pod-submit-status-1-3 Mar 21 23:38:35.890: INFO: Pod pod-submit-status-1-3 on node latest-worker2 timings total=1m14.408793376s t=486ms run=0s execute=0s Mar 21 23:38:37.555: INFO: watch delete seen for pod-submit-status-0-3 Mar 21 23:38:37.555: INFO: Pod pod-submit-status-0-3 on node latest-worker2 timings total=1m7.343284335s t=566ms run=0s execute=0s Mar 21 23:38:46.131: INFO: watch delete seen for pod-submit-status-1-4 Mar 21 23:38:46.131: INFO: Pod pod-submit-status-1-4 on node latest-worker2 timings total=10.240700198s t=1.877s run=0s execute=0s Mar 21 23:39:35.142: INFO: watch delete seen for pod-submit-status-1-5 Mar 21 23:39:35.142: INFO: Pod pod-submit-status-1-5 on node latest-worker2 timings total=49.011284614s t=1.407s run=0s execute=0s Mar 21 23:39:35.331: INFO: watch delete seen for pod-submit-status-2-5 Mar 21 23:39:35.331: INFO: Pod pod-submit-status-2-5 on node latest-worker2 timings total=59.959980642s t=1.667s run=0s execute=0s Mar 21 23:39:36.357: INFO: watch delete seen for pod-submit-status-0-4 Mar 21 23:39:36.357: INFO: Pod pod-submit-status-0-4 on node latest-worker2 timings total=58.802173937s t=416ms run=0s execute=0s Mar 21 23:39:45.603: INFO: watch delete seen for pod-submit-status-2-6 Mar 21 23:39:45.603: INFO: Pod pod-submit-status-2-6 on node latest-worker timings total=10.271802559s t=1.057s run=0s execute=0s Mar 21 23:39:45.819: INFO: watch delete seen for pod-submit-status-0-5 Mar 21 23:39:45.819: INFO: Pod pod-submit-status-0-5 on node latest-worker timings total=9.46214185s t=302ms run=0s execute=0s Mar 21 23:39:47.757: INFO: watch delete seen for pod-submit-status-2-7 Mar 21 23:39:47.757: INFO: Pod pod-submit-status-2-7 on node latest-worker timings total=2.154124807s t=672ms run=0s execute=0s Mar 21 23:39:51.937: INFO: watch delete seen for pod-submit-status-1-6 Mar 21 23:39:51.937: INFO: Pod pod-submit-status-1-6 on node latest-worker timings total=16.79467652s t=828ms run=0s execute=0s Mar 21 23:39:55.782: INFO: watch delete seen for pod-submit-status-0-6 Mar 21 23:39:55.782: INFO: Pod pod-submit-status-0-6 on node latest-worker timings total=9.962557289s t=1.764s run=0s execute=0s Mar 21 23:40:06.046: INFO: watch delete seen for pod-submit-status-0-7 Mar 21 23:40:06.046: INFO: Pod pod-submit-status-0-7 on node latest-worker2 timings total=10.263865086s t=1.566s run=0s execute=0s Mar 21 23:40:13.167: INFO: watch delete seen for pod-submit-status-0-8 Mar 21 23:40:13.167: INFO: Pod pod-submit-status-0-8 on node latest-worker2 timings total=7.12073644s t=1.19s run=0s execute=0s Mar 21 23:40:35.201: INFO: watch delete seen for pod-submit-status-1-7 Mar 21 23:40:35.201: INFO: Pod pod-submit-status-1-7 on node latest-worker2 timings total=43.264096208s t=1.481s run=0s execute=0s Mar 21 23:40:45.464: INFO: watch delete seen for pod-submit-status-2-8 Mar 21 23:40:45.464: INFO: Pod pod-submit-status-2-8 on node latest-worker2 timings total=57.707091412s t=770ms 
run=0s execute=0s Mar 21 23:40:53.850: INFO: watch delete seen for pod-submit-status-1-8 Mar 21 23:40:53.850: INFO: Pod pod-submit-status-1-8 on node latest-worker timings total=18.649498733s t=1.732s run=0s execute=0s Mar 21 23:40:56.322: INFO: watch delete seen for pod-submit-status-0-9 Mar 21 23:40:56.323: INFO: Pod pod-submit-status-0-9 on node latest-worker timings total=43.155666809s t=1.729s run=0s execute=0s Mar 21 23:41:46.231: INFO: watch delete seen for pod-submit-status-1-9 Mar 21 23:41:46.231: INFO: Pod pod-submit-status-1-9 on node latest-worker2 timings total=52.380227508s t=978ms run=0s execute=0s Mar 21 23:41:55.636: INFO: watch delete seen for pod-submit-status-2-9 Mar 21 23:41:55.636: INFO: Pod pod-submit-status-2-9 on node latest-worker timings total=1m10.171765112s t=826ms run=0s execute=0s Mar 21 23:41:56.022: INFO: watch delete seen for pod-submit-status-0-10 Mar 21 23:41:56.022: INFO: Pod pod-submit-status-0-10 on node latest-worker timings total=59.699784409s t=1.159s run=0s execute=0s Mar 21 23:42:05.519: INFO: watch delete seen for pod-submit-status-2-10 Mar 21 23:42:05.519: INFO: Pod pod-submit-status-2-10 on node latest-worker timings total=9.88350023s t=1.598s run=0s execute=0s Mar 21 23:42:55.423: INFO: watch delete seen for pod-submit-status-1-10 Mar 21 23:42:55.423: INFO: Pod pod-submit-status-1-10 on node latest-worker timings total=1m9.192009075s t=1.881s run=0s execute=0s Mar 21 23:42:55.622: INFO: watch delete seen for pod-submit-status-0-11 Mar 21 23:42:55.622: INFO: Pod pod-submit-status-0-11 on node latest-worker timings total=59.599155178s t=1.048s run=0s execute=0s Mar 21 23:42:56.180: INFO: watch delete seen for pod-submit-status-2-11 Mar 21 23:42:56.180: INFO: Pod pod-submit-status-2-11 on node latest-worker timings total=50.660405357s t=1.126s run=0s execute=0s Mar 21 23:42:58.296: INFO: watch delete seen for pod-submit-status-2-12 Mar 21 23:42:58.296: INFO: Pod pod-submit-status-2-12 on node latest-worker timings total=2.11559837s t=130ms run=0s execute=0s Mar 21 23:43:06.573: INFO: watch delete seen for pod-submit-status-0-12 Mar 21 23:43:06.573: INFO: Pod pod-submit-status-0-12 on node latest-worker timings total=10.951117865s t=1.573s run=0s execute=0s Mar 21 23:43:55.580: INFO: watch delete seen for pod-submit-status-0-13 Mar 21 23:43:55.581: INFO: Pod pod-submit-status-0-13 on node latest-worker timings total=49.007551696s t=1.633s run=0s execute=0s Mar 21 23:44:05.698: INFO: watch delete seen for pod-submit-status-0-14 Mar 21 23:44:05.698: INFO: Pod pod-submit-status-0-14 on node latest-worker timings total=10.117470869s t=1.491s run=0s execute=0s Mar 21 23:44:06.242: INFO: watch delete seen for pod-submit-status-2-13 Mar 21 23:44:06.243: INFO: Pod pod-submit-status-2-13 on node latest-worker timings total=1m7.946836322s t=331ms run=0s execute=0s Mar 21 23:44:06.377: INFO: watch delete seen for pod-submit-status-1-11 Mar 21 23:44:06.378: INFO: Pod pod-submit-status-1-11 on node latest-worker timings total=1m10.954723134s t=835ms run=0s execute=0s Mar 21 23:44:15.313: INFO: watch delete seen for pod-submit-status-1-12 Mar 21 23:44:15.313: INFO: Pod pod-submit-status-1-12 on node latest-worker timings total=8.935529028s t=1.892s run=0s execute=0s Mar 21 23:44:25.399: INFO: watch delete seen for pod-submit-status-1-13 Mar 21 23:44:25.399: INFO: Pod pod-submit-status-1-13 on node latest-worker timings total=10.086135017s t=93ms run=0s execute=0s Mar 21 23:44:35.335: INFO: watch delete seen for pod-submit-status-1-14 Mar 21 23:44:35.336: INFO: 
Pod pod-submit-status-1-14 on node latest-worker timings total=9.936077188s t=1.419s run=0s execute=0s Mar 21 23:45:06.739: INFO: watch delete seen for pod-submit-status-2-14 Mar 21 23:45:06.739: INFO: Pod pod-submit-status-2-14 on node latest-worker timings total=1m0.496892656s t=1.583s run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:45:06.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4168" for this suite. • [SLOW TEST:672.891 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":58,"completed":27,"skipped":2559,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:45:07.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Mar 21 23:45:07.813: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:45:07.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-5412" for this suite. 
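The AppArmor spec that follows is skipped because the kind nodes report a debian OS image rather than gci/ubuntu. For reference, in this release line an AppArmor profile is attached through the beta annotation rather than a first-class field; a sketch, assuming a deny-write profile already loaded on the node (pod name, image, and profile name are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo               # hypothetical name
  annotations:
    # key form: container.apparmor.security.beta.kubernetes.io/<container-name>
    container.apparmor.security.beta.kubernetes.io/test: localhost/k8s-apparmor-example-deny-write
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "echo hello > /tmp/out"]   # a deny-write profile should block this

Enforcement is checked by observing that the write fails (or the container is denied) while the profile is active on the node.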
S [SKIPPING] in Spec Setup (BeforeEach) [0.954 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:275 ------------------------------ [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:45:08.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-single-pod STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 Mar 21 23:45:09.041: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:46:09.057: INFO: Waiting for terminating namespaces to be deleted... [It] doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 Mar 21 23:46:09.217: INFO: Starting informer... STEP: Starting pod... Mar 21 23:46:10.623: INFO: Pod is running on latest-worker2. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod to be deleted Mar 21 23:47:15.788: INFO: Pod wasn't evicted. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:47:15.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-8788" for this suite. 
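The taint-manager case above applies kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute to the node and then verifies the pod is not evicted, which only holds if the pod carries a matching toleration. A sketch of such a pod (name and image are assumed, not from the test):

apiVersion: v1
kind: Pod
metadata:
  name: taint-tolerant-demo         # hypothetical name
spec:
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    # no tolerationSeconds: tolerate the taint indefinitely
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2     # assumed placeholder image

Without that toleration the NoExecute taint manager deletes the pod, which is exactly what the later "evicts pods from tainted nodes" spec in this run asserts.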
• [SLOW TEST:127.926 seconds] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 ------------------------------ {"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes","total":58,"completed":28,"skipped":2633,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:47:16.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347 STEP: Creating pod startup-d51df19e-6906-442e-9a74-8c2937f39f7d in namespace container-probe-3232 Mar 21 23:47:23.777: INFO: Started pod startup-d51df19e-6906-442e-9a74-8c2937f39f7d in namespace container-probe-3232 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 23:47:23.825: INFO: Initial restart count of pod startup-d51df19e-6906-442e-9a74-8c2937f39f7d is 0 Mar 21 23:48:26.381: INFO: Restart count of pod container-probe-3232/startup-d51df19e-6906-442e-9a74-8c2937f39f7d is now 1 (1m2.555993747s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:26.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3232" for this suite. 
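The probe case above relies on the rule that liveness checks are held off until the startup probe has succeeded once. A sketch of a pod with that shape, with illustrative file paths and timings:

apiVersion: v1
kind: Pod
metadata:
  name: startup-gates-liveness-demo   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.29
    # becomes "started" after 30s, but never becomes "healthy"
    command: ["sh", "-c", "sleep 30; touch /tmp/startup; sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/startup"]
      failureThreshold: 60            # generous window for slow starts
      periodSeconds: 10
    livenessProbe:                    # suspended until the startup probe succeeds
      exec:
        command: ["cat", "/tmp/healthy"]
      periodSeconds: 10

Once /tmp/startup exists the startup probe passes, the liveness probe takes over, exhausts its failure threshold, and the kubelet restarts the container, which is the restartCount 0 -> 1 transition logged above.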
• [SLOW TEST:70.345 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":58,"completed":29,"skipped":2806,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:26.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:33.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2255" for this suite. 
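The sysctl spec above sets kernel.shm_rmid_forced through the pod-level securityContext. A sketch (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo                 # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]

kernel.shm_rmid_forced is on the kubelet's safe list; an "unsafe" sysctl additionally requires the node's kubelet to run with --allowed-unsafe-sysctls, and anything not allowed is rejected outright, which is what the later greylisted-sysctl spec in this run checks.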
• [SLOW TEST:6.938 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":58,"completed":30,"skipped":2867,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:33.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:48:33.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-535" for this suite. •{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":58,"completed":31,"skipped":2958,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177 [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:48:33.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-single-pod STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 Mar 21 23:48:34.008: INFO: Waiting up to 1m0s for all nodes to be ready Mar 21 23:49:34.030: INFO: Waiting for terminating namespaces to be deleted... 
[It] evicts pods from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177 Mar 21 23:49:34.033: INFO: Starting informer... STEP: Starting pod... Mar 21 23:49:34.246: INFO: Pod is running on latest-worker. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod to be deleted Mar 21 23:50:17.750: INFO: Noticed Pod eviction. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:50:17.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-127" for this suite. • [SLOW TEST:104.234 seconds] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 evicts pods from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177 ------------------------------ {"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes","total":58,"completed":32,"skipped":3207,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:50:18.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 STEP: Creating pod liveness-1c573071-af64-461d-8a69-6177fa223edd in namespace container-probe-4543 Mar 21 23:50:26.249: INFO: Started pod liveness-1c573071-af64-461d-8a69-6177fa223edd in namespace container-probe-4543 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 23:50:26.323: INFO: Initial restart count of pod liveness-1c573071-af64-461d-8a69-6177fa223edd is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:54:27.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4543" for this suite. 
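The redirect probe cases hinge on how the kubelet's HTTP prober treats 3xx responses: a redirect to the same host is followed, so the target's status decides the probe, while a redirect to a different host is not followed and is treated as probe success (with a warning event). A sketch of the probe shape, assuming a hypothetical app that exposes a /redirect endpoint (image and paths are not from the test):

apiVersion: v1
kind: Pod
metadata:
  name: redirect-liveness-demo      # hypothetical name
spec:
  containers:
  - name: app
    image: example.test/redirecting-app:latest   # assumed image
    livenessProbe:
      httpGet:
        # same-host redirect (e.g. 302 -> /healthz): followed, target decides
        # non-local redirect (e.g. 302 -> http://0.0.0.0/): counted as success
        path: /redirect?loc=/healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5

That asymmetry is why the non-local variant above runs its whole observation window with restartCount still 0, while the local-redirect variant later in this run restarts as soon as its target starts failing.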
• [SLOW TEST:249.925 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":58,"completed":33,"skipped":3225,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:54:27.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod startup-dcf22de0-3bf3-4ab5-b8ab-d7c897c14a4d in namespace container-probe-8824 Mar 21 23:54:35.158: INFO: Started pod startup-dcf22de0-3bf3-4ab5-b8ab-d7c897c14a4d in namespace container-probe-8824 STEP: checking the pod's current state and verifying that restartCount is present Mar 21 23:54:35.374: INFO: Initial restart count of pod startup-dcf22de0-3bf3-4ab5-b8ab-d7c897c14a4d is 0 Mar 21 23:56:17.034: INFO: Restart count of pod container-probe-8824/startup-dcf22de0-3bf3-4ab5-b8ab-d7c897c14a4d is now 1 (1m41.660179731s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:56:17.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8824" for this suite. 
• [SLOW TEST:109.367 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":58,"completed":34,"skipped":3266,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] [Feature:Example] Secret should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:56:17.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Mar 21 23:56:17.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-4642 create -f -' Mar 21 23:56:23.196: INFO: stderr: "" Mar 21 23:56:23.196: INFO: stdout: "secret/test-secret created\n" Mar 21 23:56:23.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-4642 create -f -' Mar 21 23:56:23.599: INFO: stderr: "" Mar 21 23:56:23.599: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Mar 21 23:56:29.691: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-4642 logs secret-test-pod test-container' Mar 21 23:56:29.879: INFO: stderr: "" Mar 21 23:56:29.879: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 21 23:56:29.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-4642" for this suite. 
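The Secret example above pipes two manifests to kubectl; the log shows only the create output, but the object shapes can be reconstructed. A sketch consistent with the logged names and the logged container output (image and exact manifest details are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  data-1: dmFsdWUtMQ==              # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29             # assumed image
    command: ["sh", "-c", "echo \"content of file \\\"/etc/secret-volume/data-1\\\": $(cat /etc/secret-volume/data-1)\""]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret

Reading the mounted file back and echoing it is what produces the 'content of file "/etc/secret-volume/data-1": value-1' line captured by kubectl logs above.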
• [SLOW TEST:12.689 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":58,"completed":35,"skipped":3341,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSS ------------------------------ [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 21 23:56:29.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 Mar 21 23:56:30.313: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:56:32.743: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 21 23:56:34.382: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Mar 21 23:57:41.199: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-03-21 23:57:09 +0000 UTC restartedAt=2021-03-21 23:57:39 +0000 UTC (30s) STEP: getting restart delay-1 Mar 21 23:58:42.685: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-03-21 23:57:44 +0000 UTC restartedAt=2021-03-21 23:58:40 +0000 UTC (56s) STEP: getting restart delay-2 Mar 22 00:00:10.242: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-03-21 23:58:45 +0000 UTC restartedAt=2021-03-22 00:00:09 +0000 UTC (1m24s) STEP: updating the image Mar 22 00:00:10.811: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Mar 22 00:00:41.099: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-03-22 00:00:23 +0000 UTC restartedAt=2021-03-22 00:00:39 +0000 UTC (16s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:41.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7177" for this suite. 
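The delays logged above (30s, 56s, 1m24s, then 16s after the image update) trace the kubelet's crash-loop back-off, which roughly doubles from 10s toward a 5-minute cap and is reset when the container's image changes. A minimal pod to observe this, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-back-off-image-demo     # hypothetical name
spec:
  restartPolicy: Always
  containers:
  - name: crasher
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]  # exits immediately; each restart waits longer

Updating the image (for example, kubectl set image pod/pod-back-off-image-demo crasher=busybox:1.30) resets the back-off timer, which is the sudden drop to a 16s delay after "updating the image" above; the 5-minute cap itself is exercised by the MaxContainerBackOff spec later in this run.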
• [SLOW TEST:251.280 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":58,"completed":36,"skipped":3346,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:41.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:43.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-2662" for this suite. 
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":58,"completed":37,"skipped":3580,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeProblemDetector should run without error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:43.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Mar 22 00:00:43.827: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:00:43.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-2041" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [0.256 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358 [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:00:43.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-multiple-pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 Mar 22 00:00:44.205: INFO: Waiting up to 1m0s for all nodes to be ready Mar 22 00:01:44.227: INFO: Waiting for terminating namespaces to be deleted... [It] only evicts pods without tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358 Mar 22 00:01:44.305: INFO: Starting informer... STEP: Starting pods... 
Mar 22 00:01:45.100: INFO: Pod1 is running on latest-worker2. Tainting Node Mar 22 00:01:45.413: INFO: Pod2 is running on latest-worker. Tainting Node STEP: Trying to apply a taint on the Nodes STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod1 to be deleted Mar 22 00:02:05.484: INFO: Noticed Pod "taint-eviction-a1" gets evicted. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:02:51.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-multiple-pods-4879" for this suite. • [SLOW TEST:128.026 seconds] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 only evicts pods without tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358 ------------------------------ {"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes","total":58,"completed":38,"skipped":3722,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:02:51.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 Mar 22 00:02:52.712: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:02:54.766: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:02:56.791: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:02:59.354: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:03:01.216: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:03:03.402: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped Mar 22 00:14:43.034: INFO: getRestartDelay: restartCount = 7, 
finishedAt=2021-03-22 00:09:29 +0000 UTC restartedAt=2021-03-22 00:14:41 +0000 UTC (5m12s) Mar 22 00:19:55.951: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-03-22 00:14:46 +0000 UTC restartedAt=2021-03-22 00:19:54 +0000 UTC (5m8s) Mar 22 00:25:03.917: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-03-22 00:19:59 +0000 UTC restartedAt=2021-03-22 00:25:03 +0000 UTC (5m4s) STEP: getting restart delay after a capped delay Mar 22 00:30:17.648: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-03-22 00:25:08 +0000 UTC restartedAt=2021-03-22 00:30:16 +0000 UTC (5m8s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:17.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5826" for this suite. • [SLOW TEST:1645.677 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 ------------------------------ {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":58,"completed":39,"skipped":3872,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} S ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:30:17.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:23.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8673" for this suite. 
• [SLOW TEST:6.164 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":58,"completed":40,"skipped":3873,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:30:23.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 STEP: Creating pod liveness-7c086c96-79ce-4526-b17e-eae0db8960f6 in namespace container-probe-8759 Mar 22 00:30:27.947: INFO: Started pod liveness-7c086c96-79ce-4526-b17e-eae0db8960f6 in namespace container-probe-8759 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:30:27.950: INFO: Initial restart count of pod liveness-7c086c96-79ce-4526-b17e-eae0db8960f6 is 0 Mar 22 00:30:50.312: INFO: Restart count of pod container-probe-8759/liveness-7c086c96-79ce-4526-b17e-eae0db8960f6 is now 1 (22.361718462s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:50.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8759" for this suite. 
• [SLOW TEST:26.904 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":58,"completed":41,"skipped":3977,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:30:50.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Mar 22 00:30:51.391: INFO: Waiting up to 5m0s for pod "pod-always-succeedb4624881-e139-4f93-a042-c814ebb61fe5" in namespace "pods-7166" to be "Succeeded or Failed" Mar 22 00:30:51.405: INFO: Pod "pod-always-succeedb4624881-e139-4f93-a042-c814ebb61fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.382977ms Mar 22 00:30:53.549: INFO: Pod "pod-always-succeedb4624881-e139-4f93-a042-c814ebb61fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157888661s Mar 22 00:30:55.554: INFO: Pod "pod-always-succeedb4624881-e139-4f93-a042-c814ebb61fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163046743s Mar 22 00:30:57.560: INFO: Pod "pod-always-succeedb4624881-e139-4f93-a042-c814ebb61fe5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168297039s STEP: Saw pod success Mar 22 00:30:57.560: INFO: Pod "pod-always-succeedb4624881-e139-4f93-a042-c814ebb61fe5" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:30:59.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7166" for this suite. 
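The sandbox case above creates a run-to-completion pod and then inspects its events to confirm the kubelet did not build a second sandbox after the workload finished. The pod shape is essentially just this (name is illustrative, and restartPolicy: Never is an assumption consistent with the pod reaching Succeeded):

apiVersion: v1
kind: Pod
metadata:
  name: pod-always-succeed-demo     # hypothetical name
spec:
  restartPolicy: Never              # assumed; the pod must be able to reach Succeeded
  containers:
  - name: main
    image: busybox:1.29
    command: ["true"]               # exits 0 immediately

Once the pod reports Succeeded, any further sandbox-recreation or image-pull events would indicate the extra sandbox this spec guards against.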
• [SLOW TEST:8.940 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":58,"completed":42,"skipped":4304,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:30:59.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:31:05.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1964" for this suite. 
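The image-pull check is simple in shape: create a container, wait for it to leave the waiting state (which proves the pull succeeded), then delete it. Roughly, under illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: image-pull-example
spec:
  restartPolicy: Never
  containers:
  - name: image-pull-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28
    imagePullPolicy: Always   # always contact the registry rather than reuse a cached image
    args: ["pause"]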
• [SLOW TEST:5.509 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":58,"completed":43,"skipped":4307,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:31:05.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:31:21.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4686" for this suite. 
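Readiness gates add user-defined conditions to the pod's readiness calculation. The two condition types patched in the transcript map onto a spec like this (pod name and image are illustrative; the condition types are the ones logged above):

apiVersion: v1
kind: Pod
metadata:
  name: pod-ready-example
spec:
  readinessGates:
  - conditionType: "k8s.io/test-condition1"
  - conditionType: "k8s.io/test-condition2"
  containers:
  - name: pod-readiness-gate
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28
    args: ["pause"]

The pod only reports Ready while both custom conditions are True in its status, so patching test-condition1 back to False (the last STEP above) flips the pod to not-ready even though its containers are untouched.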
• [SLOW TEST:16.178 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":58,"completed":44,"skipped":4434,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:31:21.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 STEP: Creating pod startup-68a57213-12f5-4cbd-92c9-c29739439f99 in namespace container-probe-7420 Mar 22 00:31:25.491: INFO: Started pod startup-68a57213-12f5-4cbd-92c9-c29739439f99 in namespace container-probe-7420 STEP: checking the pod's current state and verifying that restartCount is present Mar 22 00:31:25.494: INFO: Initial restart count of pod startup-68a57213-12f5-4cbd-92c9-c29739439f99 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:35:26.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7420" for this suite. 
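The startup-probe interaction being verified: while a startup probe has not yet succeeded, the kubelet suspends liveness probing, so a slow-starting container is not killed. A minimal sketch of the pattern (timings and commands are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-example
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 30 && touch /tmp/started && sleep 600"]
    startupProbe:
      exec:
        command: ["cat", "/tmp/started"]
      periodSeconds: 1
      failureThreshold: 60    # allows up to ~60s of startup before the pod is killed
    livenessProbe:
      exec:
        command: ["cat", "/tmp/started"]
      periodSeconds: 1        # would fail during startup if it were allowed to run

The roughly four-minute observation window in the transcript ends with restartCount still at 0, which is the pass condition.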
• [SLOW TEST:244.700 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":58,"completed":45,"skipped":4652,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:35:26.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:35:28.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5061" for this suite. 
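runAsNonRoot without an explicit UID is the interesting corner here: if neither the pod spec nor the image metadata pins a non-root numeric user, the kubelet cannot prove the container won't run as root and refuses to start it. An illustrative spec (image and command are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: run-as-non-root-example
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox            # image metadata runs as root by default
    command: ["sleep", "3600"]
    securityContext:
      runAsNonRoot: true
      # no runAsUser set: the kubelet rejects the container with a
      # CreateContainerConfigError rather than risk running it as root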
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":58,"completed":46,"skipped":4778,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:35:28.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: waiting for pod running STEP: deleting the pod gracefully STEP: verifying the pod is running while in the graceful period termination Mar 22 00:35:53.109: INFO: pod is running [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:35:53.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3230" for this suite. 
• [SLOW TEST:24.968 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 graceful pod terminated should wait until preStop hook completes the process /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170 ------------------------------ {"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":58,"completed":47,"skipped":4835,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:35:53.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Mar 22 00:35:53.237: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:35:53.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-9626" for this suite. 
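The AppArmor case that follows is skipped because the kind/debian node here isn't a supported distro, but for the record, at this release a per-container profile is requested via the beta annotation, and "unconfined" disables enforcement for that container. Illustrative manifest (pod and container names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-unconfined-example
  annotations:
    # key format: container.apparmor.security.beta.kubernetes.io/<container-name>
    container.apparmor.security.beta.kubernetes.io/test: unconfined
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["cat", "/proc/self/attr/current"]   # prints the active AppArmor profile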
S [SKIPPING] in Spec Setup (BeforeEach) [0.171 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:275 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:35:53.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Mar 22 00:35:53.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-2745 create -f -' Mar 22 00:35:59.157: INFO: stderr: "" Mar 22 00:35:59.157: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Mar 22 00:36:05.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-2745 logs dapi-test-pod test-container' Mar 22 00:36:05.434: INFO: stderr: "" Mar 22 00:36:05.434: INFO: stdout: "KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2745\nMY_POD_IP=10.244.1.125\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.13\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Mar 22 00:36:05.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:41865 --kubeconfig=/root/.kube/config --namespace=examples-2745 logs dapi-test-pod test-container' Mar 22 00:36:05.607: INFO: stderr: "" Mar 22 00:36:05.607: INFO: stdout: 
"KUBERNETES_SERVICE_PORT=443\nKUBERNETES_PORT=tcp://10.96.0.1:443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-2745\nMY_POD_IP=10.244.1.125\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.13\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:05.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-2745" for this suite. • [SLOW TEST:12.320 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":58,"completed":48,"skipped":5132,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSS ------------------------------ [sig-node] Pods Extended Delete Grace Period should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:36:05.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 22 00:36:17.193: INFO: start=2021-03-22 00:36:12.179403942 +0000 UTC m=+4499.222949317, now=2021-03-22 00:36:17.193112361 +0000 UTC m=+4504.236657786, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-cdd54805-f8e1-44a9-a17a-a70d711bb26c","namespace":"pods-8240","uid":"40a93185-745f-4f65-a4f4-240d58619674","resourceVersion":"6998824","creationTimestamp":"2021-03-22T00:36:06Z","deletionTimestamp":"2021-03-22T00:36:42Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"17701130"},"annotations":{"kubernetes.io/config.seen":"2021-03-22T00:36:06.091047091Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-03-22T00:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-8qhqj","secret":{"secretName":"default-token-8qhqj","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-8qhqj","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"latest-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:06Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:14Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:14Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:06Z"}],"hostIP":"172.18.0.13","podIP":"10.244.1.126","podIPs":[{"ip":"10.244.1.126"}],"startTime":"2021-03-22T00:36:06Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","imageID":"","started":false}],"qosClass":"BestEffort"}} Mar 22 00:36:22.189: INFO: start=2021-03-22 00:36:12.179403942 +0000 UTC m=+4499.222949317, now=2021-03-22 00:36:22.189280248 +0000 UTC m=+4509.232825614, kubelet pod: {"metadata":{"name":"pod-submit-remove-cdd54805-f8e1-44a9-a17a-a70d711bb26c","namespace":"pods-8240","uid":"40a93185-745f-4f65-a4f4-240d58619674","resourceVersion":"6998824","creationTimestamp":"2021-03-22T00:36:06Z","deletionTimestamp":"2021-03-22T00:36:42Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"17701130"},"annotations":{"kubernetes.io/config.seen":"2021-03-22T00:36:06.091047091Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-03-22T00:36:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-8qhqj","secret":{"secretName":"default-token-8qhqj","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-8qhqj","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"latest-worker2","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:06Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:14Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:14Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-22T00:36:06Z"}],"hostIP":"172.18.0.13","podIP":"10.244.1.126","podIPs":[{"ip":"10.244.1.126"}],"startTime":"2021-03-22T00:36:06Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","imageID":"","started":false}],"qosClass":"BestEffort"}} Mar 22 00:36:27.186: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:27.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8240" for this suite. • [SLOW TEST:21.581 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":58,"completed":49,"skipped":5141,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 [BeforeEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:36:27.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-privileged-pod STEP: Waiting for a default service account to be provisioned in namespace [It] should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 STEP: Creating a pod with a privileged container Mar 22 00:36:27.342: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:36:29.347: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:36:31.347: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true) Mar 22 00:36:33.348: INFO: The status of Pod privileged-pod is Running (Ready = true) STEP: Executing in the privileged container Mar 22 00:36:33.352: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-7868 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:36:33.352: INFO: >>> kubeConfig: /root/.kube/config Mar 22 00:36:33.466: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-7868 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:36:33.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Executing in the non-privileged container Mar 22 00:36:33.612: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-7868 PodName:privileged-pod ContainerName:not-privileged-container 
Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 22 00:36:33.612: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:33.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-privileged-pod-7868" for this suite. • [SLOW TEST:6.534 seconds] [sig-node] PrivilegedPod [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should enable privileged commands [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49 ------------------------------ {"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":58,"completed":50,"skipped":5158,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:36:33.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 22 00:36:33.925: INFO: Waiting up to 5m0s for pod "security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4" in namespace "security-context-6528" to be "Succeeded or Failed" Mar 22 00:36:33.967: INFO: Pod "security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.552711ms Mar 22 00:36:35.978: INFO: Pod "security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052340494s Mar 22 00:36:37.989: INFO: Pod "security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.063558634s STEP: Saw pod success Mar 22 00:36:37.989: INFO: Pod "security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4" satisfied condition "Succeeded or Failed" Mar 22 00:36:37.992: INFO: Trying to get logs from node latest-worker2 pod security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4 container test-container: STEP: delete the pod Mar 22 00:36:38.014: INFO: Waiting for pod security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4 to disappear Mar 22 00:36:38.035: INFO: Pod security-context-8576fcae-5a84-4a0a-933e-4ab33ab706b4 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:38.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6528" for this suite. •{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":58,"completed":51,"skipped":5403,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 22 00:36:38.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 22 00:36:43.245: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 22 00:36:43.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-367" for this suite. 
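The termination-message flow exercised above: the container writes to its terminationMessagePath before exiting, and the kubelet copies that file into the terminated container's status, which is the "DONE" the test matched. A minimal sketch (names are illustrative; the path shown is also the default):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: term-msg-test
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File   # read the file, as opposed to FallbackToLogsOnError

The message surfaces in status.containerStatuses[].state.terminated.message, which is also what kubectl describe shows for a terminated container.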
• [SLOW TEST:5.261 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":58,"completed":52,"skipped":5506,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMar 22 00:36:43.310: INFO: Running AfterSuite actions on all nodes Mar 22 00:36:43.310: INFO: Running AfterSuite actions on node 1 Mar 22 00:36:43.310: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_node/junit_01.xml {"msg":"Test Suite completed","total":58,"completed":52,"skipped":5684,"failed":1,"failures":["[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} Summarizing 1 Failure: [Fail] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 Ran 53 of 5737 Specs in 4528.898 seconds FAIL! -- 52 Passed | 1 Failed | 0 Pending | 5684 Skipped --- FAIL: TestE2E (4528.99s) FAIL