I0325 11:09:22.329271 8 e2e.go:129] Starting e2e run "b0ce9669-6180-4264-b971-e06b155fecd9" on Ginkgo node 1
{"msg":"Test Suite starting","total":58,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1616670560 - Will randomize all specs
Will run 58 of 5737 specs

Mar 25 11:09:22.404: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:09:22.407: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 25 11:09:22.614: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 25 11:09:24.556: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (1 seconds elapsed)
Mar 25 11:09:24.556: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 25 11:09:24.556: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 25 11:09:25.703: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (1 seconds elapsed)
Mar 25 11:09:25.703: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (1 seconds elapsed)
Mar 25 11:09:25.703: INFO: e2e test version: v1.21.0-beta.1
Mar 25 11:09:25.704: INFO: kube-apiserver version: v1.21.0-alpha.0
Mar 25 11:09:25.704: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:09:26.521: INFO: Cluster IP family: ipv4
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image
  should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:09:26.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Mar 25 11:09:27.442: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to pull from private registry with secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
STEP: create image pull secret
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:09:46.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5792" for this suite.
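For context, the spec above creates a docker-registry Secret and references it from the pod that pulls the private image. A minimal sketch of an equivalent manifest (the name, registry, and image are illustrative placeholders, not the test's own values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod            # hypothetical name
spec:
  imagePullSecrets:
    - name: regcred                  # the Secret made in the "create image pull secret" step
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # placeholder for a private image
```

The referenced Secret would be created beforehand, e.g. with `kubectl create secret docker-registry regcred --docker-server=... --docker-username=... --docker-password=...`; the kubelet presents those credentials when it pulls the image.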
• [SLOW TEST:20.890 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should be able to pull from private registry with secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:393
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]","total":58,"completed":1,"skipped":41,"failed":0}
------------------------------
[sig-node] ConfigMap
  should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:09:47.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should update ConfigMap successfully
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/configmap.go:140
STEP: Creating ConfigMap configmap-6885/configmap-test-33cd4033-ce48-470b-9d74-1f9fa7bdd8ed
STEP: Updating configMap configmap-6885/configmap-test-33cd4033-ce48-470b-9d74-1f9fa7bdd8ed
STEP: Verifying update of ConfigMap configmap-6885/configmap-test-33cd4033-ce48-470b-9d74-1f9fa7bdd8ed
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:09:47.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6885" for this suite.
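The ConfigMap spec above is a plain create/update/verify cycle against the API. A minimal manifest that reproduces it by hand (data keys and values are assumptions for illustration, not the test's):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test               # the test appends a generated UUID to this
  namespace: configmap-6885
data:
  data-1: value-1                    # change this value and re-apply to exercise the update path
```

Applying the manifest, editing `data-1`, re-applying, and reading it back with `kubectl get configmap -n configmap-6885 configmap-test -o yaml` mirrors the Creating/Updating/Verifying steps in the log.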
•
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":58,"completed":2,"skipped":69,"failed":0}
------------------------------
[sig-node] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:09:48.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 25 11:10:05.263: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:10:08.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4370" for this suite.
• [SLOW TEST:22.597 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:171
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":58,"completed":3,"skipped":85,"failed":0}
------------------------------
[sig-node] [Feature:Example] Liveness
  liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:10:10.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename examples
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50
Mar 25 11:10:15.767: INFO: Found ClusterRoles; assuming RBAC is enabled.
[It] liveness pods should be automatically restarted
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
Mar 25 11:10:16.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-5345 create -f -'
Mar 25 11:10:30.617: INFO: stderr: ""
Mar 25 11:10:30.617: INFO: stdout: "pod/liveness-exec created\n"
Mar 25 11:10:30.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-5345 create -f -'
Mar 25 11:10:31.569: INFO: stderr: ""
Mar 25 11:10:31.569: INFO: stdout: "pod/liveness-http created\n"
STEP: Check restarts
Mar 25 11:10:40.091: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:42.776: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:43.622: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:45.808: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:46.568: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:48.318: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:48.766: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:51.056: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:51.497: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:53.867: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:54.341: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:56.683: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:10:56.683: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:58.772: INFO: Pod: liveness-http, restart count:0
Mar 25 11:10:58.772: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:01.304: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:01.304: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:03.693: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:03.693: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:06.263: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:06.263: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:08.467: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:08.467: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:10.819: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:10.821: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:12.935: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:12.935: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:14.941: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:14.941: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:17.999: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:18.001: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:20.114: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:20.114: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:22.484: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:22.484: INFO: Pod: liveness-http, restart count:0
Mar 25 11:11:25.145: INFO: Pod: liveness-http, restart count:1
Mar 25 11:11:25.145: INFO: Saw liveness-http restart, succeeded...
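liveness-http has now restarted once; the polling below continues until liveness-exec restarts as well. Both pods come from the documented liveness examples. A condensed sketch of the exec variant, which deliberately fails its probe after 30 seconds (fields follow the upstream example and may differ from the exact manifest the test feeds to kubectl):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
    - name: liveness
      image: k8s.gcr.io/busybox
      args: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]   # fails once /tmp/healthy is removed
        initialDelaySeconds: 5
        periodSeconds: 5
```

Once the probe fails repeatedly, the kubelet kills and restarts the container, and the restart count ticks from 0 to 1, which is exactly what the poll is waiting for.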
Mar 25 11:11:25.149: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:27.421: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:29.670: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:31.682: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:34.589: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:37.087: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:39.264: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:41.879: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:44.217: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:46.949: INFO: Pod: liveness-exec, restart count:0
Mar 25 11:11:49.363: INFO: Pod: liveness-exec, restart count:1
Mar 25 11:11:49.364: INFO: Saw liveness-exec restart, succeeded...
[AfterEach] [sig-node] [Feature:Example]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:11:49.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "examples-5345" for this suite.
• [SLOW TEST:99.316 seconds]
[sig-node] [Feature:Example]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Liveness
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:66
    liveness pods should be automatically restarted
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:67
------------------------------
{"msg":"PASSED [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted","total":58,"completed":4,"skipped":275,"failed":0}
------------------------------
[sig-node] Security Context When creating a container with runAsNonRoot
  should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:11:50.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not run with an explicit root user ID [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:11:55.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1208" for this suite.
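What this spec verifies: with runAsNonRoot: true, the kubelet refuses to start a container whose effective UID is 0. A minimal manifest of the contradiction (name and image are illustrative; the refusal typically surfaces as a CreateContainerConfigError in the container's waiting state):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: explicit-root-uid            # hypothetical name
spec:
  containers:
    - name: app
      image: k8s.gcr.io/busybox
      securityContext:
        runAsNonRoot: true
        runAsUser: 0                 # explicit root UID; the kubelet rejects the container at start
```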
• [SLOW TEST:6.820 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should not run with an explicit root user ID [LinuxOnly]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:139
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]","total":58,"completed":5,"skipped":309,"failed":0}
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial]
  evicts pods from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:11:56.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164
Mar 25 11:11:58.720: INFO: Waiting up to 1m0s for all nodes to be ready
Mar 25 11:12:58.750: INFO: Waiting for terminating namespaces to be deleted...
[It] evicts pods from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177
Mar 25 11:12:59.156: INFO: Starting informer...
STEP: Starting pod...
Mar 25 11:12:59.775: INFO: Pod is running on latest-worker. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod to be deleted
Mar 25 11:13:07.712: INFO: Noticed Pod eviction. Test successful
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:13:08.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-8008" for this suite.
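The evicted pod above carries no toleration, so the NoExecute taint removes it within seconds. A pod can delay that eviction with a matching toleration; with tolerationSeconds set, eviction happens after the timeout, which is what the "finite tolerations" spec later in this run exercises. A sketch using the test's taint key (the pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-tolerating-pod         # hypothetical name
spec:
  tolerations:
    - key: kubernetes.io/e2e-evict-taint-key
      operator: Equal
      value: evictTaintVal
      effect: NoExecute
      tolerationSeconds: 60          # evicted ~60s after the taint lands; omit to tolerate indefinitely
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.4.1
```

The taint itself corresponds to `kubectl taint nodes <node> kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute`.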
• [SLOW TEST:73.274 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  evicts pods from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:177
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes","total":58,"completed":6,"skipped":326,"failed":0}
------------------------------
[sig-node] PreStop
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:13:10.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: waiting for pod running
STEP: deleting the pod gracefully
STEP: verifying the pod is running while in the graceful period termination
Mar 25 11:13:42.752: INFO: pod is running
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:13:42.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2714" for this suite.
• [SLOW TEST:32.937 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  graceful pod terminated should wait until preStop hook completes the process
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":58,"completed":7,"skipped":676,"failed":0}
------------------------------
[sig-node] kubelet Clean up pods on node
  kubelet should be able to delete 10 pods per node in 1m0s.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:13:43.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274
[BeforeEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295
[It] kubelet should be able to delete 10 pods per node in 1m0s.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
STEP: Creating a RC of 20 pods and wait until all pods of this RC are running
STEP: creating replication controller cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a in namespace kubelet-3984
I0325 11:13:44.569617 8 runners.go:190] Created replication controller with name: cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a, namespace: kubelet-3984, replica count: 20
Mar 25 11:13:44.690: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:13:44.691: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:13:44.735: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:13:50.700: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:13:50.727: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:13:50.998: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
I0325 11:13:54.620678 8 runners.go:190] cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 11:13:56.818: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:13:56.845: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:13:57.241: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:02.912: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:03.997: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
I0325 11:14:04.621006 8 runners.go:190] cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a Pods: 20 out of 20 created, 0 running, 20 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 11:14:05.006: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:08.220: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:09.807: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:12.125: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:13.878: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
I0325 11:14:14.621876 8 runners.go:190] cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a Pods: 20 out of 20 created, 5 running, 15 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 11:14:15.445: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:20.004: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:20.564: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:20.679: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
I0325 11:14:24.622207 8 runners.go:190] cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a Pods: 20 out of 20 created, 16 running, 4 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 11:14:25.416: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:25.795: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:25.893: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:31.048: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:31.268: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:31.303: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
I0325 11:14:34.622980 8 runners.go:190] cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a Pods: 20 out of 20 created, 20 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Mar 25 11:14:35.624: INFO: Checking pods on node latest-worker2 via /runningpods endpoint
Mar 25 11:14:35.624: INFO: Checking pods on node latest-worker via /runningpods endpoint
Mar 25 11:14:36.810: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:36.859: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:36.870: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:36.980: INFO: [Resource usage on node "latest-control-plane" is not ready yet, Resource usage on node "latest-worker" is not ready yet, Resource usage on node "latest-worker2" is not ready yet]
Mar 25 11:14:36.980: INFO:
STEP: Deleting the RC
STEP: deleting ReplicationController cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a in namespace kubelet-3984, will wait for the garbage collector to delete the pods
Mar 25 11:14:38.538: INFO: Deleting ReplicationController cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a took: 792.036471ms
Mar 25 11:14:40.339: INFO: Terminating ReplicationController cleanup20-8ca424c6-4175-47d4-bc7d-53927cd83a8a pods took: 1.800827947s
Mar 25 11:14:43.049: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:43.079: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:43.490: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:48.571: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:48.576: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:49.875: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:54.167: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:14:54.458: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:14:55.842: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:14:59.630: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:01.458: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:04.946: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:05.102: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:06.617: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:10.634: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:10.971: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:12.222: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:16.293: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:16.309: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:17.770: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:21.713: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:21.727: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:23.797: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:27.706: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:27.725: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:29.441: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:33.161: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:33.196: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:34.598: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:39.418: INFO: Missing info/stats for container "runtime" on node "latest-worker2"
Mar 25 11:15:39.450: INFO: Missing info/stats for container "runtime" on node "latest-control-plane"
Mar 25 11:15:40.315: INFO: Missing info/stats for container "runtime" on node "latest-worker"
Mar 25 11:15:40.440: INFO: Checking pods on node latest-worker2 via /runningpods endpoint
Mar 25 11:15:40.440: INFO: Checking pods on node latest-worker via /runningpods endpoint
Mar 25 11:15:40.947: INFO: Deleting 20 pods on 2 nodes completed in 1.507525722s after the RC was deleted
Mar 25 11:15:40.947: INFO: CPU usage of containers on node "latest-control-plane" :
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.609 0.727 0.738 0.827 0.846 0.846
"runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
"kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
CPU usage of containers on node "latest-worker" :
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.512 0.600 0.697 0.770 0.772 0.772
"runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
"kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
CPU usage of containers on node "latest-worker2" :
container 5th%  20th% 50th% 70th% 90th% 95th% 99th%
"/"       0.000 0.605 0.715 0.761 0.774 0.821 0.821
"runtime" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
"kubelet" 0.000 0.000 0.000 0.000 0.000 0.000 0.000
[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node latest-worker2
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node latest-worker
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:15:43.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-3984" for this suite.
• [SLOW TEST:121.170 seconds]
[sig-node] kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:279
    kubelet should be able to delete 10 pods per node in 1m0s.
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341
------------------------------
{"msg":"PASSED [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.","total":58,"completed":8,"skipped":874,"failed":0}
------------------------------
[sig-node] PrivilegedPod [NodeConformance]
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
[BeforeEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:15:44.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-privileged-pod
STEP: Waiting for a default service account to be provisioned in namespace
[It] should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
STEP: Creating a pod with a privileged container
Mar 25 11:15:46.698: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:15:48.970: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:15:51.253: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:15:52.993: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:15:54.764: INFO: The status of Pod privileged-pod is Pending, waiting for it to be Running (with Ready = true)
Mar 25 11:15:57.061: INFO: The status of Pod privileged-pod is Running (Ready = true)
STEP: Executing in the privileged container
Mar 25 11:15:57.575: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2773 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:15:57.575: INFO: >>> kubeConfig: /root/.kube/config
Mar 25 11:15:57.720: INFO: ExecWithOptions {Command:[ip link del dummy1] Namespace:e2e-privileged-pod-2773 PodName:privileged-pod ContainerName:privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:15:57.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Executing in the non-privileged container
Mar 25 11:15:58.014: INFO: ExecWithOptions {Command:[ip link add dummy1 type dummy] Namespace:e2e-privileged-pod-2773 PodName:privileged-pod ContainerName:not-privileged-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Mar 25 11:15:58.015: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-node] PrivilegedPod [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:15:58.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-privileged-pod-2773" for this suite.
• [SLOW TEST:14.945 seconds]
[sig-node] PrivilegedPod [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should enable privileged commands [LinuxOnly]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/privileged.go:49
------------------------------
{"msg":"PASSED [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":58,"completed":9,"skipped":1134,"failed":0}
------------------------------
[sig-node] Security Context When creating a container with runAsNonRoot
  should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:15:59.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run with an image specified user ID
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
Mar 25 11:15:59.738: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-5897" to be "Succeeded or Failed"
Mar 25 11:16:00.103: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 364.938704ms
Mar 25 11:16:02.534: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.795906683s
Mar 25 11:16:04.930: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 5.191955107s
Mar 25 11:16:07.128: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.389323665s
Mar 25 11:16:07.128: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:16:08.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5897" for this suite.
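Here runAsNonRoot: true is satisfied implicitly: no runAsUser is set, and the kubelet accepts the non-root USER baked into the image metadata. A sketch (the image is one of the e2e test images; the tag is an assumption):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: implicit-nonroot-uid
spec:
  containers:
    - name: app
      image: k8s.gcr.io/e2e-test-images/nonroot:1.1   # image's Dockerfile declares a non-root USER
      securityContext:
        runAsNonRoot: true           # no runAsUser override; the image's USER satisfies the check
```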
• [SLOW TEST:11.473 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsNonRoot
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104
    should run with an image specified user ID
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:151
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":58,"completed":10,"skipped":1137,"failed":0}
------------------------------
[sig-node] Security Context When creating a container with runAsUser
  should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:16:10.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 0 [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
Mar 25 11:16:12.869: INFO: Waiting up to 5m0s for pod "busybox-user-0-8e39c74f-0d0d-4c60-80c1-5e9ee97bd5e8" in namespace "security-context-test-8212" to be "Succeeded or Failed"
Mar 25 11:16:12.972: INFO: Pod "busybox-user-0-8e39c74f-0d0d-4c60-80c1-5e9ee97bd5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 102.73449ms
Mar 25 11:16:15.804: INFO: Pod "busybox-user-0-8e39c74f-0d0d-4c60-80c1-5e9ee97bd5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.934365253s
Mar 25 11:16:17.877: INFO: Pod "busybox-user-0-8e39c74f-0d0d-4c60-80c1-5e9ee97bd5e8": Phase="Running", Reason="", readiness=true. Elapsed: 5.007907154s
Mar 25 11:16:20.002: INFO: Pod "busybox-user-0-8e39c74f-0d0d-4c60-80c1-5e9ee97bd5e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.132397322s
Mar 25 11:16:20.002: INFO: Pod "busybox-user-0-8e39c74f-0d0d-4c60-80c1-5e9ee97bd5e8" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:16:20.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8212" for this suite.
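The complementary case: without runAsNonRoot, an explicit runAsUser: 0 is legal and the container runs as root to completion. Sketch (the command is illustrative; the test's generated pod name carries a UUID suffix, as seen above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-0
spec:
  restartPolicy: Never               # lets the pod reach Succeeded, matching the log's phases
  containers:
    - name: busybox
      image: k8s.gcr.io/busybox
      command: ["id", "-u"]          # illustrative; prints 0 and exits
      securityContext:
        runAsUser: 0                 # explicitly request UID 0
```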
• [SLOW TEST:9.482 seconds]
[sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  When creating a container with runAsUser
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50
    should run the container with uid 0 [LinuxOnly] [NodeConformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:99
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]","total":58,"completed":11,"skipped":1241,"failed":0}
------------------------------
[sig-node] Container Runtime blackbox test when running a container with a new image
  should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:16:20.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be able to pull from private registry without secret [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
STEP: create the container
STEP: check the container status
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:16:26.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1000" for this suite.
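This is the negative counterpart of the first spec in this run: the same kind of private image but no imagePullSecrets. The kubelet's pull is unauthenticated and rejected, so the container never starts; `kubectl describe pod` would show it Waiting with reason ErrImagePull and then ImagePullBackOff. Sketch (names and registry are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-no-secret      # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # placeholder private image
  # no imagePullSecrets: the registry rejects the anonymous pull,
  # leaving the container Waiting (ErrImagePull / ImagePullBackOff)
```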
• [SLOW TEST:7.824 seconds]
[sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when running a container with a new image
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:388
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":58,"completed":12,"skipped":1781,"failed":0}
------------------------------
[sig-node] SSH
  should SSH to all nodes and run commands
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:16:28.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ssh
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:36
Mar 25 11:16:28.627: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [sig-node] SSH
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:16:28.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ssh-4014" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.885 seconds]
[sig-node] SSH
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should SSH to all nodes and run commands [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:45

  No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/ssh.go:42
------------------------------
[sig-node] crictl
  should be able to run crictl on the node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:16:28.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crictl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:33
Mar 25 11:16:29.304: INFO: Only supported for providers [gce gke] (not local)
[AfterEach] [sig-node] crictl
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Mar 25 11:16:29.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crictl-1721" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [1.879 seconds]
[sig-node] crictl
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be able to run crictl on the node [BeforeEach]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:40

  Only supported for providers [gce gke] (not local)

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/crictl.go:35
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial]
  eventually evict pod with finite tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Mar 25 11:16:30.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164
Mar 25 11:16:31.417: INFO: Waiting up to 1m0s for all nodes to be ready
Mar 25 11:17:31.444: INFO: Waiting for terminating namespaces to be deleted...
[It] eventually evict pod with finite tolerations from tainted nodes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242
Mar 25 11:17:31.506: INFO: Starting informer...
STEP: Starting pod...
Mar 25 11:17:33.981: INFO: Pod is running on latest-worker. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting to see if a Pod won't be deleted
Mar 25 11:18:40.488: INFO: Pod wasn't evicted
STEP: Waiting for Pod to be deleted
Mar 25 11:19:45.489: FAIL: Pod wasn't evicted

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0031ba900)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0031ba900)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0031ba900, 0x6d60740)
    /usr/local/go/src/testing/testing.go:1194 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1239 +0x2b3
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "taint-single-pod-1609".
STEP: Found 6 events.
Mar 25 11:19:45.915: INFO: At 2021-03-25 11:17:33 +0000 UTC - event for taint-eviction-3: {default-scheduler } Scheduled: Successfully assigned taint-single-pod-1609/taint-eviction-3 to latest-worker
Mar 25 11:19:45.915: INFO: At 2021-03-25 11:17:37 +0000 UTC - event for taint-eviction-3: {kubelet latest-worker} Pulled: Container image "k8s.gcr.io/pause:3.4.1" already present on machine
Mar 25 11:19:45.915: INFO: At 2021-03-25 11:17:41 +0000 UTC - event for taint-eviction-3: {kubelet latest-worker} Created: Created container pause
Mar 25 11:19:45.915: INFO: At 2021-03-25 11:17:41 +0000 UTC - event for taint-eviction-3: {kubelet latest-worker} Started: Started container pause
Mar 25 11:19:45.915: INFO: At 2021-03-25 11:18:44 +0000 UTC - event for taint-eviction-3: {taint-controller } TaintManagerEviction: Marking for deletion Pod taint-single-pod-1609/taint-eviction-3
Mar 25 11:19:45.915: INFO: At 2021-03-25 11:18:45 +0000 UTC - event for taint-eviction-3: {kubelet latest-worker} Killing: Stopping container pause
Mar 25 11:19:46.027: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 25 11:19:46.027: INFO:
Mar 25 11:19:46.208: INFO: Logging node info for node latest-control-plane
Mar 25 11:19:46.344: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1099787 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:18:48 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Mar 25 11:19:46.344: INFO: Logging kubelet events for node latest-control-plane
Mar 25 11:19:46.427: INFO: Logging pods the kubelet thinks is on node latest-control-plane
Mar 25 11:19:46.558: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.558: INFO: Container etcd ready: true, restart count 0
Mar 25 11:19:46.558: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.558: INFO: Container kube-controller-manager ready: true, restart count 0
Mar 25 11:19:46.558: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.558: INFO: Container kindnet-cni ready: true, restart count 0
Mar 25 11:19:46.558: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.558: INFO: Container kube-proxy ready: true, restart count 0
Mar 25 11:19:46.558: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 11:13:01 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.559: INFO: Container coredns ready: true, restart count 0
Mar 25 11:19:46.559: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.559: INFO: Container coredns ready: true, restart count 0
Mar 25 11:19:46.559: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.559: INFO: Container kube-apiserver ready: true, restart count 0
Mar 25 11:19:46.559: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.559: INFO: Container kube-scheduler ready: true, restart count 0
Mar 25 11:19:46.559: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded)
Mar 25 11:19:46.559: INFO: Container local-path-provisioner ready: true, restart count 0
W0325 11:19:46.723183 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Mar 25 11:19:46.970: INFO: Latency metrics for node latest-control-plane
Mar 25 11:19:46.970: INFO: Logging node info for node latest-worker
Mar 25 11:19:47.128: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1100582 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 10:58:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:17:33 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:19:47.130: INFO: Logging kubelet events for node latest-worker Mar 25 11:19:47.406: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 11:19:47.730: INFO: ss-0 started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:47.730: INFO: Container webserver ready: false, restart count 0 Mar 25 11:19:47.730: INFO: kindnet-bpcmh started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:47.730: INFO: Container kindnet-cni ready: false, restart count 0 Mar 25 11:19:47.730: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:47.730: INFO: Container kube-proxy ready: true, restart count 0 W0325 11:19:47.866513 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 11:19:49.443: INFO: Latency metrics for node latest-worker Mar 25 11:19:49.443: INFO: Logging node info for node latest-worker2 Mar 25 11:19:49.981: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1099773 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux daemonset-color:green io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:56:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 10:58:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {e2e.test Update v1 2021-03-25 11:18:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:daemonset-color":{}}},"f:status":{"f:capacity":{"f:example.com/fakePTSRes":{},"f:example.com/fakecpu":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakePTSRes: {{10 0} {} 10 DecimalSI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:18:08 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 
docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:19:49.982: INFO: Logging kubelet events for node latest-worker2 Mar 25 11:19:50.762: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 11:19:51.202: INFO: execpod-affinitypv5ms started at 2021-03-25 11:19:29 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:19:51.202: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:19:51.202: INFO: affinity-nodeport-timeout-vmcnm started at 2021-03-25 11:19:20 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Mar 25 11:19:51.202: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:19:51.202: INFO: ss-2 started at 2021-03-25 11:16:40 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container webserver ready: true, restart count 0 Mar 25 11:19:51.202: INFO: affinity-nodeport-timeout-f4nrv started at 2021-03-25 11:19:20 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Mar 25 11:19:51.202: INFO: affinity-nodeport-timeout-9wn94 started at 2021-03-25 11:19:20 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 Mar 25 11:19:51.202: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:19:51.202: INFO: daemon-set-dksnv started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:19:51.202: INFO: Container app ready: false, restart count 0 W0325 11:19:51.761628 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:19:52.231: INFO: Latency metrics for node latest-worker2 Mar 25 11:19:52.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-1609" for this suite. 
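[Editor's aside] The node dumps above are diagnostics for the eviction failure summarized next: the NoExecuteTaintManager test taints a node NoExecute and expects a pod whose toleration carries a finite tolerationSeconds to be evicted soon after that window lapses. A minimal sketch of such a pod against the k8s.io/api types — the taint key and value are illustrative, not necessarily what the suite applies:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// finiteTolerationPod tolerates a NoExecute taint for 10 seconds, so once
// the node is tainted the taint manager should evict it roughly 10s later.
func finiteTolerationPod() *corev1.Pod {
	tolerationSeconds := int64(10)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "taint-eviction-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // already cached on the nodes listed above
			}},
			Tolerations: []corev1.Toleration{{
				Key:               "example.com/evict-taint-key", // illustrative key
				Operator:          corev1.TolerationOpEqual,
				Value:             "evictTaintVal",
				Effect:            corev1.TaintEffectNoExecute,
				TolerationSeconds: &tolerationSeconds,
			}},
		},
	}
}

If the pod is still running well after tolerationSeconds plus its termination grace period, that is exactly the "Pod wasn't evicted" condition reported in the failure summary below.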
• Failure [202.526 seconds] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 eventually evict pod with finite tolerations from tainted nodes [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:242 Mar 25 11:19:45.489: Pod wasn't evicted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","total":58,"completed":12,"skipped":1944,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:19:53.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups Mar 25 11:19:54.910: INFO: Waiting up to 5m0s for pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1" in namespace "security-context-546" to be "Succeeded or Failed" Mar 25 11:19:54.925: INFO: Pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.892673ms Mar 25 11:19:57.649: INFO: Pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.7396852s Mar 25 11:19:59.728: INFO: Pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.818068438s Mar 25 11:20:02.015: INFO: Pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.105530814s Mar 25 11:20:04.122: INFO: Pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.212342675s STEP: Saw pod success Mar 25 11:20:04.122: INFO: Pod "security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1" satisfied condition "Succeeded or Failed" Mar 25 11:20:04.302: INFO: Trying to get logs from node latest-worker pod security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1 container test-container: STEP: delete the pod Mar 25 11:20:04.885: INFO: Waiting for pod security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1 to disappear Mar 25 11:20:04.910: INFO: Pod security-context-de6de355-267e-4689-aa3b-67ef85c3f2e1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:20:04.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-546" for this suite. • [SLOW TEST:11.720 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:69 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":58,"completed":13,"skipped":2205,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:20:05.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 Mar 25 11:20:06.191: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f" in namespace "security-context-test-3394" to be "Succeeded or Failed" Mar 25 11:20:07.087: INFO: Pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 895.91872ms Mar 25 11:20:09.342: INFO: Pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.150358273s Mar 25 11:20:11.440: INFO: Pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.248795205s Mar 25 11:20:13.446: INFO: Pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.254391225s Mar 25 11:20:15.512: INFO: Pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.320646438s Mar 25 11:20:15.512: INFO: Pod "alpine-nnp-nil-eca57825-e57a-4609-8336-d86914835b7f" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:20:15.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3394" for this suite. • [SLOW TEST:10.656 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:335 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":58,"completed":14,"skipped":2574,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:20:15.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 STEP: Creating pod liveness-f5ed73bc-9f78-4c66-a061-4cfc404ff618 in namespace container-probe-2270 Mar 25 11:20:24.770: INFO: Started pod liveness-f5ed73bc-9f78-4c66-a061-4cfc404ff618 in namespace container-probe-2270 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 11:20:25.021: INFO: Initial restart count of pod liveness-f5ed73bc-9f78-4c66-a061-4cfc404ff618 is 0 Mar 25 11:20:48.120: INFO: Restart count of pod container-probe-2270/liveness-f5ed73bc-9f78-4c66-a061-4cfc404ff618 is now 1 (23.099246635s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing 
container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:20:49.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2270" for this suite. • [SLOW TEST:34.139 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:250 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a local redirect http liveness probe","total":58,"completed":15,"skipped":2629,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:20:49.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347 STEP: Creating pod startup-d58c9354-9c32-411f-a68f-a4bf79bba5e9 in namespace container-probe-319 Mar 25 11:20:59.985: INFO: Started pod startup-d58c9354-9c32-411f-a68f-a4bf79bba5e9 in namespace container-probe-319 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 11:21:00.012: INFO: Initial restart count of pod startup-d58c9354-9c32-411f-a68f-a4bf79bba5e9 is 0 Mar 25 11:22:04.333: INFO: Restart count of pod container-probe-319/startup-d58c9354-9c32-411f-a68f-a4bf79bba5e9 is now 1 (1m4.320720426s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:22:05.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-319" for this suite. 
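[Editor's aside] For the restart observed above (restart count 1 after ~64s), the pod under test pairs a startup probe with a liveness probe: the kubelet runs only the startup probe until it succeeds, then switches to liveness probing, whose failure forces the restart. A minimal sketch with illustrative commands and thresholds; note that corev1.Probe embeds Handler in the v1.21 API logged here (renamed ProbeHandler in later releases):

package main

import corev1 "k8s.io/api/core/v1"

// startupThenLiveness returns a container whose liveness probe is held off
// until the startup probe succeeds; if the liveness command then fails, the
// kubelet restarts the container, which is the transition the test asserts.
func startupThenLiveness() corev1.Container {
	return corev1.Container{
		Name:  "busybox",
		Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
		StartupProbe: &corev1.Probe{
			Handler:          corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup-done"}}},
			PeriodSeconds:    2,
			FailureThreshold: 30, // tolerate up to ~60s of startup
		},
		LivenessProbe: &corev1.Probe{
			Handler:          corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}}},
			PeriodSeconds:    5,
			FailureThreshold: 1,
		},
	}
}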
• [SLOW TEST:75.909 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted by liveness probe after startup probe enables it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:347 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted by liveness probe after startup probe enables it","total":58,"completed":16,"skipped":2711,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:22:05.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node STEP: Watching for error events or started pod STEP: Checking that the pod was rejected [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:22:10.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8430" for this suite. 
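[Editor's aside] The sysctl rejection above follows from the kubelet's allowlist: safe sysctls (e.g. kernel.shm_rmid_forced) are always permitted, while anything else must appear in the kubelet's --allowed-unsafe-sysctls or the pod is rejected on the node. A sketch of a pod requesting a greylisted sysctl — kernel.msgmax is illustrative, as the log does not show the suite's exact choice:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sysctlPod asks for a sysctl outside the safe set; unless the node's
// kubelet was started with --allowed-unsafe-sysctls=kernel.msgmax, the
// kubelet rejects the pod as forbidden, which is what the test checks.
func sysctlPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-pod"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.msgmax", Value: "10000"}},
			},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
}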
• [SLOW TEST:5.579 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not launch unsafe, but not explicitly enabled sysctls on the node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:183 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":58,"completed":17,"skipped":2787,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:22:11.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 Mar 25 11:22:11.859: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:22:13.902: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:22:15.884: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:22:17.962: INFO: The status of Pod back-off-cap is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:22:19.889: INFO: The status of Pod back-off-cap is Running (Ready = true) STEP: getting restart delay when capped Mar 25 11:33:43.418: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-03-25 11:28:35 +0000 UTC restartedAt=2021-03-25 11:33:42 +0000 UTC (5m7s) Mar 25 11:39:11.882: INFO: getRestartDelay: restartCount = 8, finishedAt=2021-03-25 11:33:47 +0000 UTC restartedAt=2021-03-25 11:39:05 +0000 UTC (5m18s) Mar 25 11:44:19.753: INFO: getRestartDelay: restartCount = 9, finishedAt=2021-03-25 11:39:10 +0000 UTC restartedAt=2021-03-25 11:44:18 +0000 UTC (5m8s) STEP: getting restart delay after a capped delay Mar 25 11:49:39.080: INFO: getRestartDelay: restartCount = 10, finishedAt=2021-03-25 11:44:23 +0000 UTC restartedAt=2021-03-25 11:49:38 +0000 UTC (5m15s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:49:39.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-708" for this suite. 
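[Editor's aside] The delays logged above (5m7s, 5m18s, 5m8s, 5m15s) sit just above the kubelet's crash-loop back-off cap: the restart delay doubles from a 10s base up to MaxContainerBackOff (5 minutes), and the excess over 5m is container-exit detection and polling slack. A small self-contained sketch of that doubling, using the documented kubelet defaults rather than values read from this cluster:

package main

import (
	"fmt"
	"time"
)

// restartDelays models the kubelet's container restart back-off: doubling
// from a 10s base, capped at MaxContainerBackOff (5m).
func restartDelays(restarts int) []time.Duration {
	const (
		base       = 10 * time.Second
		maxBackoff = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, restarts)
	d := base
	for i := 0; i < restarts; i++ {
		delays = append(delays, d)
		d *= 2
		if d > maxBackoff {
			d = maxBackoff
		}
	}
	return delays
}

func main() {
	// Prints [10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s 5m0s 5m0s]: by the
	// seventh restart the test samples, the delay is pinned at the cap.
	fmt.Println(restartDelays(10))
}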
• [SLOW TEST:1647.922 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should cap back-off at MaxContainerBackOff [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:723 ------------------------------ {"msg":"PASSED [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]","total":58,"completed":18,"skipped":2857,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:49:39.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 11:49:39.861: INFO: Waiting up to 5m0s for pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2" in namespace "security-context-635" to be "Succeeded or Failed" Mar 25 11:49:39.984: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2": Phase="Pending", Reason="", readiness=false. Elapsed: 123.579876ms Mar 25 11:49:42.549: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.688632685s Mar 25 11:49:45.094: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.233183604s Mar 25 11:49:47.108: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.246820267s Mar 25 11:49:49.305: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2": Phase="Running", Reason="", readiness=true. Elapsed: 9.444376672s Mar 25 11:49:51.457: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.596682808s STEP: Saw pod success Mar 25 11:49:51.457: INFO: Pod "security-context-2522003c-ba20-4ef0-8831-a6940d5507e2" satisfied condition "Succeeded or Failed" Mar 25 11:49:51.613: INFO: Trying to get logs from node latest-worker pod security-context-2522003c-ba20-4ef0-8831-a6940d5507e2 container test-container: STEP: delete the pod Mar 25 11:49:52.097: INFO: Waiting for pod security-context-2522003c-ba20-4ef0-8831-a6940d5507e2 to disappear Mar 25 11:49:52.188: INFO: Pod security-context-2522003c-ba20-4ef0-8831-a6940d5507e2 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:49:52.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-635" for this suite. 
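[Editor's aside] The seccomp test above still drives the legacy seccomp.security.alpha.kubernetes.io annotations (visible in its STEP line); the structured equivalent is the securityContext.seccompProfile field introduced in v1.19. A sketch of unconfined seccomp at container scope:

package main

import corev1 "k8s.io/api/core/v1"

// unconfinedContainer disables seccomp filtering for one container via the
// structured field; the pod-level equivalent lives on PodSecurityContext.
func unconfinedContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-container",
		Image: "k8s.gcr.io/e2e-test-images/busybox:1.29",
		SecurityContext: &corev1.SecurityContext{
			SeccompProfile: &corev1.SeccompProfile{
				Type: corev1.SeccompProfileTypeUnconfined,
			},
		},
	}
}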
• [SLOW TEST:13.162 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the container [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:161 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]","total":58,"completed":19,"skipped":2881,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSS ------------------------------ [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:49:52.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Mar 25 11:49:53.314: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:49:53.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-1552" for this suite. 
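[Editor's aside] The AppArmor spec is skipped next because the framework's node-OS-distro check only runs it on gci/ubuntu images, and this provider reports otherwise. When it does run, the profile is selected per container through a pod annotation whose key suffix must match the container name; the profile name below is the illustrative example from the Kubernetes docs and must already be loaded on the node:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// apparmorPod pins its single container to a node-local AppArmor profile.
func apparmorPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "apparmor-pod",
			Annotations: map[string]string{
				"container.apparmor.security.beta.kubernetes.io/test": "localhost/k8s-apparmor-example-deny-write",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "docker.io/library/busybox:1.28",
			}},
		},
	}
}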
S [SKIPPING] in Spec Setup (BeforeEach) [2.137 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 should enforce an AppArmor profile [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:43 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:275 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:49:54.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 STEP: creating the pod Mar 25 11:49:55.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1928 create -f -' Mar 25 11:50:12.846: INFO: stderr: "" Mar 25 11:50:12.846: INFO: stdout: "pod/dapi-test-pod created\n" STEP: checking if name and namespace were passed correctly Mar 25 11:50:23.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1928 logs dapi-test-pod test-container' Mar 25 11:50:23.992: INFO: stderr: "" Mar 25 11:50:23.992: INFO: stdout: "KUBERNETES_PORT=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-1928\nMY_POD_IP=10.244.2.136\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.17\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" Mar 25 11:50:23.992: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1928 logs dapi-test-pod test-container' Mar 25 11:50:24.964: INFO: stderr: "" Mar 25 11:50:24.964: INFO: stdout: 
"KUBERNETES_PORT=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT=443\nHOSTNAME=dapi-test-pod\nSHLVL=1\nHOME=/root\nMY_POD_NAMESPACE=examples-1928\nMY_POD_IP=10.244.2.136\nKUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\nPATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\nKUBERNETES_PORT_443_TCP_PORT=443\nKUBERNETES_PORT_443_TCP_PROTO=tcp\nMY_HOST_IP=172.18.0.17\nKUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\nKUBERNETES_SERVICE_PORT_HTTPS=443\nKUBERNETES_SERVICE_HOST=10.96.0.1\nPWD=/\nMY_POD_NAME=dapi-test-pod\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:24.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1928" for this suite. • [SLOW TEST:30.877 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:133 should create a pod that prints his name and namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:134 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace","total":58,"completed":20,"skipped":2933,"failed":1,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeProblemDetector should run without error /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:25.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-problem-detector STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:52 Mar 25 11:50:26.399: INFO: No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' [AfterEach] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:50:26.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-problem-detector-1942" for this suite. 
S [SKIPPING] in Spec Setup (BeforeEach) [1.419 seconds] [sig-node] NodeProblemDetector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should run without error [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:60 No SSH Key for provider local: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory'' /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/node_problem_detector.go:53 ------------------------------ SSSS ------------------------------ [sig-node] Probing container should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:50:26.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 Mar 25 11:50:58.560: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:00.627: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:02.629: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:06.023: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:07.632: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:09.758: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:12.247: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:12.980: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:14.777: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:16.693: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:18.872: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = false) Mar 25 11:51:21.291: INFO: The status of Pod startup-f185022c-971e-4865-9f96-6a9e947be5e8 is Running (Ready = true) Mar 25 11:51:22.672: INFO: Container started at 2021-03-25 11:50:57.694264822 +0000 UTC m=+2497.110481644, pod became ready at 2021-03-25 11:51:21.291276639 +0000 UTC m=+2520.707493403, 23.597011759s after startupProbe succeeded Mar 25 11:51:22.672: FAIL: Pod became ready in 23.597011759s, more than 5s after startupProbe succeeded. It means that the delay readiness probes were not initiated immediately after startup finished. 
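What the failing spec exercises: a container whose readiness is gated by a startupProbe, with the assertion that the Ready condition flips within at most one readiness period (5s here) of the startupProbe first succeeding; in this run it took 23.6s, hence the FAIL. A minimal sketch reconstructed from the events logged below (the busybox image and the cat /tmp/startup probe appear there); the container command and timings are assumptions:

  apiVersion: v1
  kind: Pod
  metadata:
    name: startup-demo                   # the e2e pod name is generated
  spec:
    containers:
    - name: busybox
      image: k8s.gcr.io/e2e-test-images/busybox:1.29
      # Create the probe file after a delay, so the startupProbe fails first and
      # then succeeds (matching the "Startup probe failed" event below).
      command: ["sh", "-c", "sleep 20 && touch /tmp/startup && sleep 600"]
      startupProbe:
        exec:
          command: ["cat", "/tmp/startup"]   # fails until the file exists
        periodSeconds: 5
        failureThreshold: 60
      readinessProbe:
        exec:
          command: ["true"]                  # trivially succeeds; should fire immediately once startup passes
        periodSeconds: 5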
Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0031ba900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0031ba900) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0031ba900, 0x6d60740) /usr/local/go/src/testing/testing.go:1194 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1239 +0x2b3 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-2261". STEP: Found 5 events. Mar 25 11:51:23.041: INFO: At 2021-03-25 11:50:28 +0000 UTC - event for startup-f185022c-971e-4865-9f96-6a9e947be5e8: {default-scheduler } Scheduled: Successfully assigned container-probe-2261/startup-f185022c-971e-4865-9f96-6a9e947be5e8 to latest-worker2 Mar 25 11:51:23.041: INFO: At 2021-03-25 11:50:31 +0000 UTC - event for startup-f185022c-971e-4865-9f96-6a9e947be5e8: {kubelet latest-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29" already present on machine Mar 25 11:51:23.041: INFO: At 2021-03-25 11:50:36 +0000 UTC - event for startup-f185022c-971e-4865-9f96-6a9e947be5e8: {kubelet latest-worker2} Created: Created container busybox Mar 25 11:51:23.041: INFO: At 2021-03-25 11:50:36 +0000 UTC - event for startup-f185022c-971e-4865-9f96-6a9e947be5e8: {kubelet latest-worker2} Started: Started container busybox Mar 25 11:51:23.041: INFO: At 2021-03-25 11:50:44 +0000 UTC - event for startup-f185022c-971e-4865-9f96-6a9e947be5e8: {kubelet latest-worker2} Unhealthy: Startup probe failed: cat: can't open '/tmp/startup': No such file or directory Mar 25 11:51:24.047: INFO: POD NODE PHASE GRACE CONDITIONS Mar 25 11:51:24.047: INFO: startup-f185022c-971e-4865-9f96-6a9e947be5e8 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 11:50:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 11:51:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 11:51:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-03-25 11:50:28 +0000 UTC }] Mar 25 11:51:24.047: INFO: Mar 25 11:51:24.277: INFO: Logging node info for node latest-control-plane Mar 25 11:51:24.829: INFO: Node Info: &Node{ObjectMeta:{latest-control-plane cc9ffc7a-24ee-4720-b82b-ca49361a1767 1121004 0 2021-03-22 08:06:26 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2021-03-22 08:06:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}} {kubeadm Update v1 2021-03-22 08:06:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{}}}}} {kube-controller-manager Update v1 2021-03-22 08:06:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:24 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:48:52 +0000 UTC,LastTransitionTime:2021-03-22 08:06:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.16,},NodeAddress{Type:Hostname,Address:latest-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7ddc81afc45247dcbfc9057854ace76d,SystemUUID:bb656e9a-07dd-4f2a-b240-e40b62fcf128,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:51:24.830: INFO: Logging kubelet events for node latest-control-plane Mar 25 11:51:25.234: INFO: Logging pods the kubelet thinks is on node latest-control-plane Mar 25 11:51:25.297: INFO: etcd-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container etcd ready: true, restart count 0 Mar 25 11:51:25.297: INFO: kube-controller-manager-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 25 11:51:25.297: INFO: kindnet-f7lbb started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:51:25.297: INFO: kube-proxy-vs4qz started at 2021-03-22 08:06:44 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:51:25.297: INFO: coredns-74ff55c5b-nh9lj started at 2021-03-25 
11:13:01 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container coredns ready: true, restart count 0 Mar 25 11:51:25.297: INFO: coredns-74ff55c5b-zfkjb started at 2021-03-25 11:13:02 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container coredns ready: true, restart count 0 Mar 25 11:51:25.297: INFO: kube-apiserver-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container kube-apiserver ready: true, restart count 0 Mar 25 11:51:25.297: INFO: kube-scheduler-latest-control-plane started at 2021-03-22 08:06:37 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container kube-scheduler ready: true, restart count 0 Mar 25 11:51:25.297: INFO: local-path-provisioner-8b46957d4-mm6wg started at 2021-03-22 08:07:00 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:25.297: INFO: Container local-path-provisioner ready: true, restart count 0 W0325 11:51:25.664070 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:51:26.183: INFO: Latency metrics for node latest-control-plane Mar 25 11:51:26.183: INFO: Logging node info for node latest-worker Mar 25 11:51:26.445: INFO: Node Info: &Node{ObjectMeta:{latest-worker d799492c-1b1f-4258-b431-31204511a98f 1122157 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-218":"csi-mock-csi-mock-volumes-218","csi-mock-csi-mock-volumes-2733":"csi-mock-csi-mock-volumes-2733","csi-mock-csi-mock-volumes-3982":"csi-mock-csi-mock-volumes-3982","csi-mock-csi-mock-volumes-4129":"csi-mock-csi-mock-volumes-4129","csi-mock-csi-mock-volumes-4395":"csi-mock-csi-mock-volumes-4395","csi-mock-csi-mock-volumes-5145":"csi-mock-csi-mock-volumes-5145","csi-mock-csi-mock-volumes-6281":"csi-mock-csi-mock-volumes-6281","csi-mock-csi-mock-volumes-8884":"csi-mock-csi-mock-volumes-8884"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-24 20:49:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:07:16 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.17,},NodeAddress{Type:Hostname,Address:latest-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4be5fb85644b44b5b165e551ded370d1,SystemUUID:55469ec9-514f-495b-b880-812c90367461,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 docker.io/gluster/glusterdynamic-provisioner:v1.0],SizeBytes:111200078,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:abe8cef9e116f2d5ec1175c386e33841ff3386779138b425af876384b0fd7ccb k8s.gcr.io/build-image/debian-iptables:buster-v1.5.0],SizeBytes:38088315,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:4c9410a4ee555dcb0e8b7bd6fc77c65ac400f7c5bd4555df8187630efaea6ea4 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0],SizeBytes:37934917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 docker.io/aquasec/kube-bench:0.3.1],SizeBytes:8042926,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e 
gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:51:26.445: INFO: Logging kubelet events for node latest-worker Mar 25 11:51:26.769: INFO: Logging pods the kubelet thinks is on node latest-worker Mar 25 11:51:26.923: INFO: test-recreate-deployment-546b5fd69c-82rlw started at 2021-03-25 11:51:18 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:26.923: INFO: Container agnhost ready: false, restart count 0 Mar 25 11:51:26.923: INFO: pod-553969b4-4a26-4ebc-ab39-18e5caee398d started at 2021-03-25 11:51:19 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:26.923: INFO: Container write-pod ready: false, restart count 0 Mar 25 11:51:26.923: INFO: pod-41785dce-dc0b-405a-a87e-b92609710c7c started at 2021-03-25 11:51:07 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:26.923: INFO: Container write-pod ready: true, restart count 0 Mar 25 11:51:26.923: INFO: kube-proxy-kjrrj started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:26.923: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:51:26.923: INFO: hostexec-latest-worker-tqx5r started at 2021-03-25 11:50:53 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:26.923: INFO: Container agnhost-container ready: true, restart count 0 Mar 25 11:51:26.923: INFO: kindnet-bpcmh started at 2021-03-25 11:19:46 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:26.923: INFO: Container kindnet-cni ready: true, restart count 0 W0325 11:51:26.986321 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Mar 25 11:51:27.263: INFO: Latency metrics for node latest-worker Mar 25 11:51:27.263: INFO: Logging node info for node latest-worker2 Mar 25 11:51:27.381: INFO: Node Info: &Node{ObjectMeta:{latest-worker2 525d2fa2-95f1-4436-b726-c3866136dd3a 1120532 0 2021-03-22 08:06:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:latest-worker2 kubernetes.io/os:linux] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1087":"csi-mock-csi-mock-volumes-1087","csi-mock-csi-mock-volumes-1305":"csi-mock-csi-mock-volumes-1305","csi-mock-csi-mock-volumes-1436":"csi-mock-csi-mock-volumes-1436","csi-mock-csi-mock-volumes-4385":"csi-mock-csi-mock-volumes-4385","csi-mock-csi-mock-volumes-5253":"csi-mock-csi-mock-volumes-5253","csi-mock-csi-mock-volumes-5595":"csi-mock-csi-mock-volumes-5595","csi-mock-csi-mock-volumes-6229":"csi-mock-csi-mock-volumes-6229","csi-mock-csi-mock-volumes-6949":"csi-mock-csi-mock-volumes-6949","csi-mock-csi-mock-volumes-7130":"csi-mock-csi-mock-volumes-7130","csi-mock-csi-mock-volumes-7225":"csi-mock-csi-mock-volumes-7225","csi-mock-csi-mock-volumes-8538":"csi-mock-csi-mock-volumes-8538","csi-mock-csi-mock-volumes-9682":"csi-mock-csi-mock-volumes-9682","csi-mock-csi-mock-volumes-9809":"csi-mock-csi-mock-volumes-9809"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2021-03-22 08:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {kube-controller-manager Update v1 2021-03-25 11:41:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}} {e2e.test Update v1 2021-03-25 11:41:34 +0000 UTC FieldsV1 {"f:status":{"f:capacity":{"f:example.com/fakecpu":{}}}}} {kubelet Update v1 2021-03-25 11:43:02 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:csi.volume.kubernetes.io/nodeid":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:io.kubernetes.storage.mock/node":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}},"f:status":{"f:addresses":{".":{},"k:{\"type\":\"Hostname\"}":{".":{},"f:address":{},"f:type":{}},"k:{\"type\":\"InternalIP\"}":{".":{},"f:address":{},"f:type":{}}},"f:allocatable":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:example.com/fakecpu":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:capacity":{".":{},"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-1Gi":{},"f:hugepages-2Mi":{},"f:memory":{},"f:pods":{}},"f:conditions":{".":{},"k:{\"type\":\"DiskPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"MemoryPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"PIDPressure\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:daemonEndpoints":{"f:kubeletEndpoint":{"f:Port":{}}},"f:images":{},"f:nodeInfo":{"f:architecture":{},"f:bootID":{},"f:containerRuntimeVersion":{},"f:kernelVersion":{},"f:kubeProxyVersion":{},"f:kubeletVersion":{},"f:machineID":{},"f:operatingSystem":{},"f:osImage":{},"f:systemUUID":{}}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/latest/latest-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},example.com/fakecpu: {{1 3} {} 1k DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922104832 0} {} 131759868Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:06:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-03-25 11:48:02 +0000 UTC,LastTransitionTime:2021-03-22 08:07:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.15,},NodeAddress{Type:Hostname,Address:latest-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:cf016e12ad1c42869781f444437713bb,SystemUUID:796f762a-f81b-4766-9835-b125da6d5224,BootID:b267d78b-f69b-4338-80e8-3f4944338e5d,KernelVersion:4.15.0-118-generic,OSImage:Ubuntu 20.04 LTS,ContainerRuntimeVersion:containerd://1.3.3-14-g449e9269,KubeletVersion:v1.21.0-alpha.0,KubeProxyVersion:v1.21.0-alpha.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.13-0],SizeBytes:254659261,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.21.0-alpha.0],SizeBytes:136866158,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 docker.io/aquasec/kube-hunter:0.3.1],SizeBytes:124684106,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.21.0-alpha.0],SizeBytes:95511852,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.0],SizeBytes:88147262,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.21.0-alpha.0],SizeBytes:66088748,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:a3f7549ea04c419276e8b84e90a515bbce5bc8a057be2ed974ec45492eca346e k8s.gcr.io/e2e-test-images/agnhost:2.26],SizeBytes:49216572,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706 k8s.gcr.io/e2e-test-images/agnhost:2.28],SizeBytes:49210832,},ContainerImage{Names:[k8s.gcr.io/coredns:1.7.0],SizeBytes:45355487,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:20c828075d1e36f679d6a91e905b0927141eef5e15be0c9a1ca4a6a0ed9313d2 k8s.gcr.io/sig-storage/csi-provisioner:v2.1.0],SizeBytes:21205045,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679 k8s.gcr.io/sig-storage/csi-provisioner:v1.6.0],SizeBytes:19408504,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab k8s.gcr.io/sig-storage/csi-attacher:v2.2.0],SizeBytes:18451536,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c k8s.gcr.io/sig-storage/csi-resizer:v0.5.0],SizeBytes:18412631,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v1.3.0],SizeBytes:7717137,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:8e977ceafa6261c6677a9d9d84deeb7d5ef34a4bdee128814072be1fe9d92c9f k8s.gcr.io/sig-storage/mock-driver:v4.0.2],SizeBytes:7458549,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb docker.io/appropriate/curl:edge],SizeBytes:2854657,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/busybox:latest],SizeBytes:1144547,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7 k8s.gcr.io/e2e-test-images/busybox:1.29],SizeBytes:732569,},ContainerImage{Names:[docker.io/library/busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 docker.io/library/busybox:1.28],SizeBytes:727869,},ContainerImage{Names:[k8s.gcr.io/pause:3.2],SizeBytes:685724,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:301268,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:299480,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 25 11:51:27.381: INFO: Logging kubelet events for node latest-worker2 Mar 25 11:51:27.391: INFO: Logging pods the kubelet thinks is on node latest-worker2 Mar 25 11:51:27.863: INFO: kindnet-7xphn started at 2021-03-24 20:36:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:27.863: INFO: Container kindnet-cni ready: true, restart count 0 Mar 25 11:51:27.863: INFO: pvc-volume-tester-gqglb started at 2021-03-24 09:41:54 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:27.863: INFO: Container volume-tester ready: false, restart count 0 Mar 25 11:51:27.863: INFO: kube-proxy-dv4wd started at 2021-03-22 08:06:55 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:27.863: INFO: Container kube-proxy ready: true, restart count 0 Mar 25 11:51:27.863: INFO: busybox-privileged-false-86db3570-92a2-48a7-aed9-3f80c412352f started at 2021-03-25 11:51:17 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:27.863: INFO: Container busybox-privileged-false-86db3570-92a2-48a7-aed9-3f80c412352f ready: false, restart count 0 Mar 25 11:51:27.863: INFO: startup-f185022c-971e-4865-9f96-6a9e947be5e8 started at 2021-03-25 11:50:28 +0000 UTC (0+1 container statuses recorded) Mar 25 11:51:27.863: INFO: Container busybox ready: true, restart count 0 W0325 11:51:27.910319 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Mar 25 11:51:28.255: INFO: Latency metrics for node latest-worker2 Mar 25 11:51:28.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2261" for this suite. • Failure [62.022 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be ready immediately after startupProbe succeeds [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:376 Mar 25 11:51:22.672: Pod became ready in 23.597011759s, more than 5s after startupProbe succeeded. It means that the delay readiness probes were not initiated immediately after startup finished. 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-node] Probing container should be ready immediately after startupProbe succeeds","total":58,"completed":20,"skipped":2995,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:51:28.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 Mar 25 11:51:29.872: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-457" to be "Succeeded or Failed" Mar 25 11:51:29.951: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 79.390784ms Mar 25 11:51:32.744: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.872606543s Mar 25 11:51:34.979: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 5.107556868s Mar 25 11:51:38.989: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 9.117014938s Mar 25 11:51:41.035: INFO: Pod "explicit-nonroot-uid": Phase="Running", Reason="", readiness=true. Elapsed: 11.163433934s Mar 25 11:51:43.579: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.70749858s Mar 25 11:51:43.579: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:51:43.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-457" for this suite. 
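The explicit-nonroot-uid pod above maps to a securityContext along these lines; the kubelet starts the container because the requested UID is non-zero (with runAsNonRoot: true and a resolved UID of 0 it would refuse to run it). The image, command, and container name are assumptions; the pod name and the "Succeeded or Failed" wait match the run above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: explicit-nonroot-uid
  spec:
    restartPolicy: Never                 # lets the pod reach Succeeded, the condition the test waits on
    containers:
    - name: explicit-nonroot-uid
      image: k8s.gcr.io/e2e-test-images/busybox:1.29   # assumed
      command: ["sh", "-c", "id -u"]                    # prints the effective UID, then exits 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 1234                  # illustrative; any non-zero UID satisfies runAsNonRoot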
• [SLOW TEST:16.904 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should run with an explicit non-root user ID [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:129 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":58,"completed":21,"skipped":3236,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods Extended Delete Grace Period should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:51:45.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 25 11:52:03.355: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:03.355410094 +0000 UTC m=+2562.771626866, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-d55148c7-e5af-4859-b500-1d35c9702515","namespace":"pods-2172","uid":"88a66737-78a0-4cd5-b196-b9427a5cfc39","resourceVersion":"1123107","creationTimestamp":"2021-03-25T11:51:48Z","deletionTimestamp":"2021-03-25T11:52:28Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"798565344"},"annotations":{"kubernetes.io/config.seen":"2021-03-25T11:51:48.811303016Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-03-25T11:51:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-rxbz4","secret":{"secretName":"default-token-rxbz4","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-rxbz4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"latest-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:51:48Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:52:00Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:52:00Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:51:48Z"}],"hostIP":"172.18.0.17","podIP":"10.244.2.146","podIPs":[{"ip":"10.244.2.146"}],"startTime":"2021-03-25T11:51:48Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","imageID":"","started":false}],"qosClass":"BestEffort"}}
Mar 25 11:52:08.752: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:08.752314884 +0000 UTC m=+2568.168531676, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:13.474: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:13.474107645 +0000 UTC m=+2572.890324422, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:18.332: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:18.332979345 +0000 UTC m=+2577.749196135, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:23.454: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:23.454502624 +0000 UTC m=+2582.870719435, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:28.545: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:28.545644474 +0000 UTC m=+2587.961861323, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:33.823: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:33.823129134 +0000 UTC m=+2593.239345960, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:38.505: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:38.505344336 +0000 UTC m=+2597.921561123, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:43.409: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:43.409654024 +0000 UTC m=+2602.825870841, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:48.309: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:48.30938544 +0000 UTC m=+2607.725602217, kubelet pod: [pod JSON identical to the dump above; resourceVersion 1123107 unchanged]
Mar 25 11:52:53.892: INFO: start=2021-03-25 11:51:58.250213691 +0000 UTC m=+2557.666430605, now=2021-03-25 11:52:53.892109029 +0000 UTC m=+2613.308326010, kubelet pod: {"metadata":{"name":"pod-submit-remove-d55148c7-e5af-4859-b500-1d35c9702515","namespace":"pods-2172","uid":"88a66737-78a0-4cd5-b196-b9427a5cfc39","resourceVersion":"1123107","creationTimestamp":"2021-03-25T11:51:48Z","deletionTimestamp":"2021-03-25T11:52:28Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"798565344"},"annotations":{"kubernetes.io/config.seen":"2021-03-25T11:51:48.811303016Z","kubernetes.io/config.source":"api"},"managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"v1","time":"2021-03-25T11:51:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"default-token-rxbz4","secret":{"secretName":"default-token-rxbz4","defaultMode":420}}],"containers":[{"name":"agnhost-container","image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-rxbz4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{}}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"latest-worker","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:51:48Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:52:00Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:52:00Z","reason":"ContainersNotReady","message":"containers with unready status: [agnhost-container]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-03-25T11:51:48Z"}],"hostIP":"172.18.0.17","podIP":"10.244.2.146","podIPs":[{"ip":"10.244.2.146"}],"startTime":"2021-03-25T11:51:48Z","containerStatuses":[{"name":"agnhost-container","state":{"waiting":{"reason":"ContainerCreating"}},"lastState":{"terminated":{"exitCode":137,"reason":"ContainerStatusUnknown","message":"The container could not be located when the pod was deleted. 
The container used to be Running","startedAt":null,"finishedAt":null}},"ready":false,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/agnhost:2.28","imageID":"","started":false}],"qosClass":"BestEffort"}} Mar 25 11:52:58.998: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:52:59.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2172" for this suite. • [SLOW TEST:75.041 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:51 should be submitted and removed /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:62 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Delete Grace Period should be submitted and removed","total":58,"completed":22,"skipped":3294,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:53:00.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 25 11:53:03.684: INFO: Waiting up to 5m0s for pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0" in namespace "security-context-4415" to be "Succeeded or Failed" Mar 25 11:53:04.148: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0": Phase="Pending", Reason="", readiness=false. Elapsed: 464.151222ms Mar 25 11:53:07.517: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.833673643s Mar 25 11:53:10.074: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.390285024s Mar 25 11:53:12.951: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.267282133s Mar 25 11:53:15.328: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.644461656s Mar 25 11:53:17.437: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.75329563s STEP: Saw pod success Mar 25 11:53:17.437: INFO: Pod "security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0" satisfied condition "Succeeded or Failed" Mar 25 11:53:17.598: INFO: Trying to get logs from node latest-worker pod security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0 container test-container: STEP: delete the pod Mar 25 11:53:20.191: INFO: Waiting for pod security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0 to disappear Mar 25 11:53:20.552: INFO: Pod security-context-d9d4cdad-60df-491c-90b6-9cdbd51383c0 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:53:20.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4415" for this suite. • [SLOW TEST:21.166 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:77 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":58,"completed":23,"skipped":3386,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:53:21.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:53:41.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7458" for this suite. 
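Note: the pull-image case above reduces to creating a one-container pod and polling its container status until the pull and start succeed. A minimal sketch of such a pod (the pod name is illustrative; the image is the one used throughout this run):

apiVersion: v1
kind: Pod
metadata:
  name: image-pull-test              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: image-pull-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28   # image pulled in this run
    imagePullPolicy: Always          # force an actual pull rather than a node cache hit
    args: ["pause"]
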
• [SLOW TEST:20.097 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should be able to pull image [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:382 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]","total":58,"completed":24,"skipped":3444,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:53:42.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 STEP: Creating pod liveness-9027f873-6bd5-4345-9099-783661dc4ec2 in namespace container-probe-7030 Mar 25 11:53:52.224: INFO: Started pod liveness-9027f873-6bd5-4345-9099-783661dc4ec2 in namespace container-probe-7030 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 11:53:52.270: INFO: Initial restart count of pod liveness-9027f873-6bd5-4345-9099-783661dc4ec2 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:57:55.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7030" for this suite. 
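Note: the probe test above asserts that a container whose HTTP liveness endpoint answers with a redirect to another host is not restarted; the kubelet treats the redirect response as a probe success (emitting only a warning event), so restartCount stays 0. A sketch of a pod in that shape, with illustrative name, port, and redirect target:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-redirect       # illustrative name
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.28
    args: ["liveness"]               # agnhost's liveness web server, as used by these e2e tests
    livenessProbe:
      httpGet:
        path: /redirect?loc=http://0.0.0.0/   # illustrative endpoint answering with a non-local redirect
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1
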
• [SLOW TEST:254.527 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a non-local redirect http liveness probe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:265 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe","total":58,"completed":25,"skipped":3510,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:57:56.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Mar 25 11:57:58.172: INFO: Waiting up to 5m0s for pod "security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda" in namespace "security-context-6798" to be "Succeeded or Failed" Mar 25 11:57:58.182: INFO: Pod "security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda": Phase="Pending", Reason="", readiness=false. Elapsed: 10.035038ms Mar 25 11:58:00.300: INFO: Pod "security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128186456s Mar 25 11:58:02.348: INFO: Pod "security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176639334s Mar 25 11:58:04.564: INFO: Pod "security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.392458043s STEP: Saw pod success Mar 25 11:58:04.564: INFO: Pod "security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda" satisfied condition "Succeeded or Failed" Mar 25 11:58:04.779: INFO: Trying to get logs from node latest-worker pod security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda container test-container: STEP: delete the pod Mar 25 11:58:05.717: INFO: Waiting for pod security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda to disappear Mar 25 11:58:05.857: INFO: Pod security-context-1d71eca7-93e8-4ae7-86b1-740c5052cdda no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:58:05.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6798" for this suite. 
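Note: unlike the pod-level field exercised earlier, container.SecurityContext.RunAsUser is set per container and overrides any pod-wide runAsUser. A minimal sketch, with illustrative uid, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: security-context-runasuser   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative; any image with `id` works
    command: ["sh", "-c", "id -u"]   # prints the effective uid, which should match runAsUser
    securityContext:
      runAsUser: 1001                # illustrative uid; container-level, overrides pod-level runAsUser
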
• [SLOW TEST:10.047 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support container.SecurityContext.RunAsUser [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:109 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":58,"completed":26,"skipped":3595,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSS ------------------------------ [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:58:06.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 Mar 25 11:58:07.735: INFO: Waiting up to 5m0s for pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935" in namespace "security-context-test-9082" to be "Succeeded or Failed" Mar 25 11:58:07.822: INFO: Pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935": Phase="Pending", Reason="", readiness=false. Elapsed: 86.135366ms Mar 25 11:58:09.853: INFO: Pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117617642s Mar 25 11:58:12.006: INFO: Pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270791327s Mar 25 11:58:14.398: INFO: Pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935": Phase="Pending", Reason="", readiness=false. Elapsed: 6.662186833s Mar 25 11:58:16.426: INFO: Pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.690250045s Mar 25 11:58:16.426: INFO: Pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935" satisfied condition "Succeeded or Failed" Mar 25 11:58:16.707: INFO: Got logs for pod "busybox-privileged-true-34c3cad9-c24c-469c-9e72-67acce2c1935": "" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 11:58:16.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9082" for this suite. 
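Note: privileged: true grants the container full host device and capability access, and the check above amounts to "can the container perform a root-only host operation". A hedged sketch (image and command are illustrative, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-true      # mirrors the pod-name pattern in this run
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox                   # illustrative
    command: ["sh", "-c", "ip link add dummy0 type dummy"]  # a netlink operation that is denied when unprivileged
    securityContext:
      privileged: true
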
• [SLOW TEST:10.245 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with privileged /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:232 should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:282 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":58,"completed":27,"skipped":3600,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 11:58:16.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 Mar 25 11:58:18.144: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:58:20.272: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:58:22.180: INFO: The status of Pod pod-back-off-image is Pending, waiting for it to be Running (with Ready = true) Mar 25 11:58:24.338: INFO: The status of Pod pod-back-off-image is Running (Ready = true) STEP: getting restart delay-0 Mar 25 11:59:37.579: INFO: getRestartDelay: restartCount = 3, finishedAt=2021-03-25 11:59:02 +0000 UTC restartedAt=2021-03-25 11:59:35 +0000 UTC (33s) STEP: getting restart delay-1 Mar 25 12:00:35.547: INFO: getRestartDelay: restartCount = 4, finishedAt=2021-03-25 11:59:40 +0000 UTC restartedAt=2021-03-25 12:00:32 +0000 UTC (52s) STEP: getting restart delay-2 Mar 25 12:02:14.941: INFO: getRestartDelay: restartCount = 5, finishedAt=2021-03-25 12:00:37 +0000 UTC restartedAt=2021-03-25 12:02:06 +0000 UTC (1m29s) STEP: updating the image Mar 25 12:02:16.708: INFO: Successfully updated pod "pod-back-off-image" STEP: get restart delay after image update Mar 25 12:02:56.333: INFO: getRestartDelay: restartCount = 7, finishedAt=2021-03-25 12:02:28 +0000 UTC restartedAt=2021-03-25 12:02:51 +0000 UTC (23s) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:02:56.333: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2077" for this suite. • [SLOW TEST:280.931 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:682 ------------------------------ {"msg":"PASSED [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]","total":58,"completed":28,"skipped":3703,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] [Feature:Example] Secret should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:02:57.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename examples STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:50 [It] should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 STEP: creating secret and pod Mar 25 12:03:00.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1457 create -f -' Mar 25 12:03:34.664: INFO: stderr: "" Mar 25 12:03:34.664: INFO: stdout: "secret/test-secret created\n" Mar 25 12:03:34.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1457 create -f -' Mar 25 12:03:35.103: INFO: stderr: "" Mar 25 12:03:35.103: INFO: stdout: "pod/secret-test-pod created\n" STEP: checking if secret was read correctly Mar 25 12:03:56.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:45565 --kubeconfig=/root/.kube/config --namespace=examples-1457 logs secret-test-pod test-container' Mar 25 12:03:57.950: INFO: stderr: "" Mar 25 12:03:57.950: INFO: stdout: "content of file \"/etc/secret-volume/data-1\": value-1\n\n" [AfterEach] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:03:57.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "examples-1457" for this suite. 
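Note: the secret example boils down to two manifests piped to kubectl create -f -: a Secret and a pod that mounts it as a volume. Reconstructed from the names and output logged above (the busybox image and the exact command are assumptions; the secret name, mount path, file name, and file content are from this run):

apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  data-1: dmFsdWUtMQ==               # base64 for "value-1", matching the logged file content
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # assumption; any image with cat works
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-secret
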
• [SLOW TEST:61.766 seconds] [sig-node] [Feature:Example] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:113 should create a pod that reads a secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/examples.go:114 ------------------------------ {"msg":"PASSED [sig-node] [Feature:Example] Secret should create a pod that reads a secret","total":58,"completed":29,"skipped":3770,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:148 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:03:59.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:148 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:04:01.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-3279" for this suite. 
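Note: sysctls are requested through pod.spec.securityContext.sysctls, and the names are validated at pod-creation time, so a malformed entry is rejected by the API server before anything is scheduled; that is why the test above completes within seconds of the create call. A sketch with one valid and one invalid entry (this run used one valid and two invalid):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-reject-demo           # illustrative name
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # a valid, namespaced sysctl
      value: "1"
    - name: foo-                     # malformed name: the create is rejected with a validation error
      value: "bar"
  containers:
  - name: test-container
    image: busybox                   # illustrative
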
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":58,"completed":30,"skipped":3824,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:04:02.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 12:04:06.608: INFO: Waiting up to 5m0s for pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba" in namespace "security-context-6279" to be "Succeeded or Failed" Mar 25 12:04:07.160: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba": Phase="Pending", Reason="", readiness=false. Elapsed: 552.363476ms Mar 25 12:04:09.619: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.01076461s Mar 25 12:04:12.202: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba": Phase="Pending", Reason="", readiness=false. Elapsed: 5.594034323s Mar 25 12:04:14.551: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba": Phase="Pending", Reason="", readiness=false. Elapsed: 7.94282123s Mar 25 12:04:16.776: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba": Phase="Running", Reason="", readiness=true. Elapsed: 10.167741217s Mar 25 12:04:19.177: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.569537127s STEP: Saw pod success Mar 25 12:04:19.177: INFO: Pod "security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba" satisfied condition "Succeeded or Failed" Mar 25 12:04:19.358: INFO: Trying to get logs from node latest-worker2 pod security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba container test-container: STEP: delete the pod Mar 25 12:04:19.807: INFO: Waiting for pod security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba to disappear Mar 25 12:04:20.699: INFO: Pod security-context-6d2fad23-849d-4c7f-b34d-b97c42fadcba no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:04:20.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-6279" for this suite. 
• [SLOW TEST:18.425 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp default which is unconfined [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:183 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":58,"completed":31,"skipped":3854,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:04:21.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 STEP: Creating pod startup-0e002695-bcbf-4ad0-9857-384a7b2c8fab in namespace container-probe-3785 Mar 25 12:04:30.524: INFO: Started pod startup-0e002695-bcbf-4ad0-9857-384a7b2c8fab in namespace container-probe-3785 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 12:04:30.691: INFO: Initial restart count of pod startup-0e002695-bcbf-4ad0-9857-384a7b2c8fab is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:08:30.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3785" for this suite. 
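The four-minute observation window above, with restartCount pinned at its initial 0, is the whole point of the probe test that follows: the kubelet suppresses the liveness probe until the startup probe has succeeded once. A sketch of a pod that relies on that gating, assuming a 1.21-vintage k8s.io/api (paths, timings, and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both probes check the same file, which appears only after the slow
	// "startup" finishes, so liveness would fail if it ran immediately.
	probe := func(failureThreshold int32) *corev1.Probe {
		return &corev1.Probe{
			// Handler was renamed ProbeHandler in k8s.io/api >= 0.23.
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/startup-done"}},
			},
			PeriodSeconds:    10,
			FailureThreshold: failureThreshold,
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "startup-gates-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 180; touch /tmp/startup-done; sleep 600"},
				// Liveness would restart the container after a single miss...
				LivenessProbe: probe(1),
				// ...but it is suppressed until the startup probe succeeds, and the
				// startup probe is given up to 60 x 10s = 600s to do so.
				StartupProbe: probe(60),
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

The failureThreshold-times-periodSeconds budget on the startup probe is the design lever: liveness enforcement begins only after the startup probe's first success, so a slow-starting container is never penalized by an aggressive liveness threshold.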
• [SLOW TEST:249.781 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:318 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it","total":58,"completed":32,"skipped":4004,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358 [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:08:30.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-multiple-pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345 Mar 25 12:08:31.273: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 12:09:31.295: INFO: Waiting for terminating namespaces to be deleted... [It] only evicts pods without tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358 Mar 25 12:09:31.578: INFO: Starting informer... STEP: Starting pods... Mar 25 12:09:32.744: INFO: Pod1 is running on latest-worker. Tainting Node Mar 25 12:09:32.955: INFO: Pod2 is running on latest-worker2. Tainting Node STEP: Trying to apply a taint on the Nodes STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod1 to be deleted Mar 25 12:09:57.317: INFO: Noticed Pod "taint-eviction-a1" gets evicted. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:10:42.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-multiple-pods-5498" for this suite. 
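The asymmetry in the taint test above, where only Pod1 ("taint-eviction-a1") is evicted while Pod2 survives the same NoExecute taint, comes down to a single toleration. A minimal sketch, with the taint key, value, and effect taken from the log and the toleration an illustrative reconstruction:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The taint the test applies to both nodes (values as logged above).
	taint := corev1.Taint{
		Key:    "kubernetes.io/e2e-evict-taint-key",
		Value:  "evictTaintVal",
		Effect: corev1.TaintEffectNoExecute,
	}
	// Pod1 carries no toleration, so the NoExecute taint manager evicts it.
	// Pod2 carries a toleration like this one; with TolerationSeconds left nil
	// it tolerates the taint indefinitely and stays put.
	toleration := corev1.Toleration{
		Key:      taint.Key,
		Operator: corev1.TolerationOpEqual,
		Value:    taint.Value,
		Effect:   corev1.TaintEffectNoExecute,
	}
	fmt.Printf("taint: %+v\ntoleration: %+v\n", taint, toleration)
}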
• [SLOW TEST:132.611 seconds] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 only evicts pods without tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:358 ------------------------------ {"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes","total":58,"completed":33,"skipped":4085,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Mount propagation should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 [BeforeEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:10:43.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename mount-propagation STEP: Waiting for a default service account to be provisioned in namespace [It] should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 Mar 25 12:10:44.053: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:47.726: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:49.100: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:51.359: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:52.563: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:55.523: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:56.988: INFO: The status of Pod master is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:10:58.077: INFO: The status of Pod master is Running (Ready = true) Mar 25 12:10:59.719: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:02.162: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:04.618: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:06.134: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:07.743: INFO: The status of Pod slave is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:09.921: INFO: The status of Pod slave is Running (Ready = true) Mar 25 12:11:10.227: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:12.311: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:15.132: INFO: The status of Pod private is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:16.332: INFO: The status of Pod private is Pending, waiting for it to be Running 
(with Ready = true) Mar 25 12:11:18.341: INFO: The status of Pod private is Running (Ready = true) Mar 25 12:11:20.254: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:22.280: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:24.906: INFO: The status of Pod default is Pending, waiting for it to be Running (with Ready = true) Mar 25 12:11:26.687: INFO: The status of Pod default is Running (Ready = true) Mar 25 12:11:26.738: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:26.738: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:27.076: INFO: Exec stderr: "" Mar 25 12:11:27.630: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:27.630: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:28.055: INFO: Exec stderr: "" Mar 25 12:11:28.909: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:28.909: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:29.324: INFO: Exec stderr: "" Mar 25 12:11:29.484: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:29.484: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:30.597: INFO: Exec stderr: "" Mar 25 12:11:31.296: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:31.296: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:31.980: INFO: Exec stderr: "" Mar 25 12:11:32.281: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:32.281: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:32.592: INFO: Exec stderr: "" Mar 25 12:11:32.673: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:32.673: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:33.177: INFO: Exec stderr: "" Mar 25 12:11:33.424: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:33.424: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:34.118: INFO: Exec stderr: "" Mar 25 12:11:34.234: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:34.235: INFO: >>> 
kubeConfig: /root/.kube/config Mar 25 12:11:34.396: INFO: Exec stderr: "" Mar 25 12:11:34.508: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:34.508: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:35.738: INFO: Exec stderr: "" Mar 25 12:11:35.845: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:35.845: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:36.048: INFO: Exec stderr: "" Mar 25 12:11:36.109: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:36.109: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:36.709: INFO: Exec stderr: "" Mar 25 12:11:36.786: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/master] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:36.786: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:37.250: INFO: Exec stderr: "" Mar 25 12:11:37.955: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/slave] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:37.955: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:39.291: INFO: Exec stderr: "" Mar 25 12:11:40.017: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/private] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:40.018: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:40.986: INFO: Exec stderr: "" Mar 25 12:11:41.297: INFO: ExecWithOptions {Command:[/bin/sh -c test -d /mnt/test/default] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:41.297: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:41.758: INFO: Exec stderr: "" Mar 25 12:11:41.901: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-master /mnt/test/master; echo master > /mnt/test/master/file] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:41.901: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:42.760: INFO: Exec stderr: "" Mar 25 12:11:42.937: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-slave /mnt/test/slave; echo slave > /mnt/test/slave/file] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:42.937: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:43.146: INFO: Exec stderr: "" Mar 25 12:11:43.270: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-private /mnt/test/private; echo private > /mnt/test/private/file] Namespace:mount-propagation-2272 PodName:private 
ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:43.270: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:43.726: INFO: Exec stderr: "" Mar 25 12:11:43.880: INFO: ExecWithOptions {Command:[/bin/sh -c mount -t tmpfs e2e-mount-propagation-default /mnt/test/default; echo default > /mnt/test/default/file] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:43.880: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:44.338: INFO: Exec stderr: "" Mar 25 12:11:54.591: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-2272"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-2272"/host; echo host > "/var/lib/kubelet/mount-propagation-2272"/host/file] Namespace:mount-propagation-2272 PodName:hostexec-latest-worker-wvjp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:11:54.591: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:55.030: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:55.030: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:55.308: INFO: pod master mount master: stdout: "master", stderr: "" error: Mar 25 12:11:55.363: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:55.363: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:55.580: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:11:55.695: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:55.695: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:55.994: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:11:56.036: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:56.036: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:56.150: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:11:56.197: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:56.197: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:56.574: INFO: pod master mount host: stdout: "host", stderr: "" error: Mar 25 12:11:57.341: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2272 
PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:57.341: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:11:58.621: INFO: pod slave mount master: stdout: "master", stderr: "" error: Mar 25 12:11:59.395: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:11:59.395: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:00.016: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: Mar 25 12:12:00.347: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:00.347: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:01.246: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:01.317: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:01.317: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:01.494: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:01.532: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:01.532: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:01.753: INFO: pod slave mount host: stdout: "host", stderr: "" error: Mar 25 12:12:01.860: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:01.860: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:02.002: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:02.115: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:02.115: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:02.288: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:02.359: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:02.359: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:02.697: INFO: pod private mount private: stdout: "private", stderr: "" error: Mar 25 12:12:02.753: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:02.753: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:02.929: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:03.047: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:03.047: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:05.000: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:07.055: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:07.055: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:10.164: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:10.730: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:10.730: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:12.797: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:12.849: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:12.849: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:12.975: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:13.018: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:13.018: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:13.297: INFO: pod default mount default: stdout: "default", stderr: "" error: Mar 25 12:12:13.359: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:13.359: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:13.479: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1 Mar 25 12:12:13.479: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-2272"/master/file` = master] Namespace:mount-propagation-2272 PodName:hostexec-latest-worker-wvjp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:12:13.479: INFO: >>> 
kubeConfig: /root/.kube/config Mar 25 12:12:13.670: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-2272"/slave/file] Namespace:mount-propagation-2272 PodName:hostexec-latest-worker-wvjp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:12:13.670: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:13.824: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-2272"/host] Namespace:mount-propagation-2272 PodName:hostexec-latest-worker-wvjp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:12:13.824: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:14.027: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/default] Namespace:mount-propagation-2272 PodName:default ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:14.027: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:14.423: INFO: Exec stderr: "" Mar 25 12:12:14.515: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/private] Namespace:mount-propagation-2272 PodName:private ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:14.515: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:16.971: INFO: Exec stderr: "" Mar 25 12:12:17.758: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/slave] Namespace:mount-propagation-2272 PodName:slave ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:17.758: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:18.785: INFO: Exec stderr: "" Mar 25 12:12:18.958: INFO: ExecWithOptions {Command:[/bin/sh -c umount /mnt/test/master] Namespace:mount-propagation-2272 PodName:master ContainerName:cntr Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Mar 25 12:12:18.958: INFO: >>> kubeConfig: /root/.kube/config Mar 25 12:12:19.670: INFO: Exec stderr: "" Mar 25 12:12:19.670: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c rm -rf "/var/lib/kubelet/mount-propagation-2272"] Namespace:mount-propagation-2272 PodName:hostexec-latest-worker-wvjp2 ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false} Mar 25 12:12:19.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Deleting pod hostexec-latest-worker-wvjp2 in namespace mount-propagation-2272 [AfterEach] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:12:23.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "mount-propagation-2272" for this suite. 
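The cat-matrix in the mount-propagation test above decodes as follows: the pods are named for their propagation mode, all mount the same hostPath directory, and one pod can read another's tmpfs file only if propagation lets the mount travel via the host. A sketch of the container shapes involved, assuming a 1.21-vintage k8s.io/api (names and the sleep command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	privileged := true
	bidirectional := corev1.MountPropagationBidirectional
	hostToContainer := corev1.MountPropagationHostToContainer

	// One container per propagation mode, all sharing the same hostPath volume.
	// A nil mode (the "default" pod) behaves like an explicit None ("private").
	mkContainer := func(name string, mode *corev1.MountPropagationMode) corev1.Container {
		return corev1.Container{
			Name:    name,
			Image:   "busybox",
			Command: []string{"sleep", "3600"},
			// Bidirectional propagation is only permitted for privileged containers.
			SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			VolumeMounts: []corev1.VolumeMount{{
				Name:             "host-dir",
				MountPath:        "/mnt/test",
				MountPropagation: mode,
			}},
		}
	}

	containers := []corev1.Container{
		mkContainer("master", &bidirectional),  // pushes its mounts to the host, sees the host's
		mkContainer("slave", &hostToContainer), // sees host and master mounts, leaks nothing out
		mkContainer("private", nil),            // sees only its own mounts
	}
	b, _ := json.MarshalIndent(containers, "", "  ")
	fmt.Println(string(b))
}

That maps exactly onto the results logged above: master (Bidirectional) reads its own and the host's file and its tmpfs appears on the host; slave (HostToContainer) additionally reads master's file but its own mount stays invisible elsewhere; private and default each read only their own file.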
• [SLOW TEST:100.829 seconds] [sig-node] Mount propagation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should propagate mounts to the host /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:82 ------------------------------ {"msg":"PASSED [sig-node] Mount propagation should propagate mounts to the host","total":58,"completed":34,"skipped":4149,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:12:24.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:12:36.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-989" for this suite. 
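"Whitelisted" in the sysctl test above is a node-level decision: sysctls outside the safe set are honoured only where the kubelet operator has opted in, typically via the --allowed-unsafe-sysctls kubelet flag (or the equivalent allowedUnsafeSysctls config field). A hedged sketch of the shape involved; the pod reuses the kernel.shm_rmid_forced name the STEP line mentions, and the container command is illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A sysctl treated as unsafe is honoured only on nodes whose kubelet runs
	// with something like --allowed-unsafe-sysctls=kernel.shm_rmid_forced (or a
	// pattern such as kernel.*); elsewhere the kubelet rejects the pod.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "unsafe-sysctl-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/sysctl", "kernel.shm_rmid_forced"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}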
• [SLOW TEST:12.936 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually whitelisted /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:108 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":58,"completed":35,"skipped":4246,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Pods Extended Pod Container Status should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:12:37.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 STEP: creating pods that should always exit 1 and terminating the pod after a random delay Mar 25 12:12:58.930: INFO: watch delete seen for pod-submit-status-0-0 Mar 25 12:12:58.930: INFO: Pod pod-submit-status-0-0 on node latest-worker2 timings total=21.070252019s t=1.08s run=0s execute=0s Mar 25 12:13:04.171: INFO: watch delete seen for pod-submit-status-1-0 Mar 25 12:13:04.171: INFO: Pod pod-submit-status-1-0 on node latest-worker2 timings total=26.311124002s t=1.663s run=0s execute=0s Mar 25 12:14:00.689: INFO: watch delete seen for pod-submit-status-1-1 Mar 25 12:14:00.689: INFO: Pod pod-submit-status-1-1 on node latest-worker2 timings total=56.51784356s t=1.788s run=0s execute=0s Mar 25 12:14:01.005: INFO: watch delete seen for pod-submit-status-0-1 Mar 25 12:14:01.005: INFO: Pod pod-submit-status-0-1 on node latest-worker2 timings total=1m2.074782097s t=1.559s run=0s execute=0s Mar 25 12:14:08.907: INFO: watch delete seen for pod-submit-status-2-0 Mar 25 12:14:08.907: INFO: Pod pod-submit-status-2-0 on node latest-worker2 timings total=1m31.046937998s t=789ms run=0s execute=0s Mar 25 12:15:11.732: INFO: watch delete seen for pod-submit-status-0-2 Mar 25 12:15:11.733: INFO: Pod pod-submit-status-0-2 on node latest-worker2 timings total=1m10.727382271s t=31ms run=0s execute=0s Mar 25 12:15:20.535: INFO: watch delete seen for pod-submit-status-1-2 Mar 25 12:15:20.535: INFO: Pod pod-submit-status-1-2 on node latest-worker2 timings total=1m19.846051577s t=358ms run=0s execute=0s Mar 25 12:15:20.810: INFO: watch delete seen for pod-submit-status-2-1 Mar 25 12:15:20.810: INFO: Pod pod-submit-status-2-1 on node latest-worker timings total=1m11.902978888s t=295ms run=0s 
execute=0s Mar 25 12:15:26.985: INFO: watch delete seen for pod-submit-status-2-2 Mar 25 12:15:26.985: INFO: Pod pod-submit-status-2-2 on node latest-worker timings total=6.175299394s t=501ms run=0s execute=0s Mar 25 12:16:16.926: INFO: watch delete seen for pod-submit-status-0-3 Mar 25 12:16:16.927: INFO: Pod pod-submit-status-0-3 on node latest-worker timings total=1m5.193960742s t=846ms run=0s execute=0s Mar 25 12:16:16.929: INFO: watch delete seen for pod-submit-status-2-3 Mar 25 12:16:16.929: INFO: Pod pod-submit-status-2-3 on node latest-worker2 timings total=49.94399948s t=801ms run=0s execute=0s Mar 25 12:16:17.656: INFO: watch delete seen for pod-submit-status-1-3 Mar 25 12:16:17.656: INFO: Pod pod-submit-status-1-3 on node latest-worker2 timings total=57.121006138s t=698ms run=0s execute=0s Mar 25 12:16:26.285: INFO: watch delete seen for pod-submit-status-1-4 Mar 25 12:16:26.285: INFO: Pod pod-submit-status-1-4 on node latest-worker2 timings total=8.628513813s t=29ms run=0s execute=0s Mar 25 12:16:36.514: INFO: watch delete seen for pod-submit-status-2-4 Mar 25 12:16:36.514: INFO: Pod pod-submit-status-2-4 on node latest-worker2 timings total=19.584945309s t=1.474s run=0s execute=0s Mar 25 12:16:39.501: INFO: watch delete seen for pod-submit-status-0-4 Mar 25 12:16:39.501: INFO: Pod pod-submit-status-0-4 on node latest-worker2 timings total=22.574071351s t=1.009s run=0s execute=0s Mar 25 12:16:52.900: INFO: watch delete seen for pod-submit-status-2-5 Mar 25 12:16:52.901: INFO: Pod pod-submit-status-2-5 on node latest-worker2 timings total=16.386152991s t=1.83s run=0s execute=0s Mar 25 12:16:55.947: INFO: watch delete seen for pod-submit-status-0-5 Mar 25 12:16:55.947: INFO: Pod pod-submit-status-0-5 on node latest-worker2 timings total=16.446521272s t=1.971s run=0s execute=0s Mar 25 12:16:58.146: INFO: watch delete seen for pod-submit-status-2-6 Mar 25 12:16:58.146: INFO: Pod pod-submit-status-2-6 on node latest-worker2 timings total=5.24561382s t=1.115s run=0s execute=0s Mar 25 12:17:11.289: INFO: watch delete seen for pod-submit-status-0-6 Mar 25 12:17:11.289: INFO: Pod pod-submit-status-0-6 on node latest-worker2 timings total=15.341418499s t=1.824s run=0s execute=0s Mar 25 12:17:16.187: INFO: watch delete seen for pod-submit-status-2-7 Mar 25 12:17:16.187: INFO: Pod pod-submit-status-2-7 on node latest-worker2 timings total=18.040484544s t=1.787s run=0s execute=0s Mar 25 12:17:20.848: INFO: watch delete seen for pod-submit-status-1-5 Mar 25 12:17:20.848: INFO: Pod pod-submit-status-1-5 on node latest-worker2 timings total=54.563055522s t=1.654s run=0s execute=0s Mar 25 12:17:25.313: INFO: watch delete seen for pod-submit-status-1-6 Mar 25 12:17:25.314: INFO: Pod pod-submit-status-1-6 on node latest-worker2 timings total=4.465608924s t=964ms run=0s execute=0s Mar 25 12:17:25.888: INFO: watch delete seen for pod-submit-status-0-7 Mar 25 12:17:25.888: INFO: Pod pod-submit-status-0-7 on node latest-worker timings total=14.598647318s t=976ms run=0s execute=0s Mar 25 12:17:46.553: INFO: watch delete seen for pod-submit-status-1-7 Mar 25 12:17:46.554: INFO: Pod pod-submit-status-1-7 on node latest-worker2 timings total=21.240002936s t=1.796s run=0s execute=0s Mar 25 12:17:56.604: INFO: watch delete seen for pod-submit-status-1-8 Mar 25 12:17:56.604: INFO: Pod pod-submit-status-1-8 on node latest-worker timings total=10.050244588s t=107ms run=0s execute=0s Mar 25 12:18:26.949: INFO: watch delete seen for pod-submit-status-1-9 Mar 25 12:18:26.950: INFO: Pod pod-submit-status-1-9 on 
node latest-worker timings total=30.345686921s t=535ms run=0s execute=0s Mar 25 12:18:26.950: INFO: watch delete seen for pod-submit-status-0-8 Mar 25 12:18:26.950: INFO: Pod pod-submit-status-0-8 on node latest-worker2 timings total=1m1.062610366s t=301ms run=0s execute=0s Mar 25 12:18:29.339: INFO: watch delete seen for pod-submit-status-2-8 Mar 25 12:18:29.339: INFO: Pod pod-submit-status-2-8 on node latest-worker2 timings total=1m13.151798875s t=1.987s run=0s execute=0s Mar 25 12:18:30.120: INFO: watch delete seen for pod-submit-status-1-10 Mar 25 12:18:30.120: INFO: Pod pod-submit-status-1-10 on node latest-worker timings total=3.169947836s t=378ms run=0s execute=0s Mar 25 12:18:33.511: INFO: watch delete seen for pod-submit-status-2-9 Mar 25 12:18:33.511: INFO: Pod pod-submit-status-2-9 on node latest-worker timings total=4.172062311s t=537ms run=0s execute=0s Mar 25 12:18:37.721: INFO: watch delete seen for pod-submit-status-2-10 Mar 25 12:18:37.721: INFO: Pod pod-submit-status-2-10 on node latest-worker2 timings total=4.210249116s t=304ms run=0s execute=0s Mar 25 12:18:46.930: INFO: watch delete seen for pod-submit-status-0-9 Mar 25 12:18:46.930: INFO: Pod pod-submit-status-0-9 on node latest-worker timings total=19.979752648s t=981ms run=0s execute=0s Mar 25 12:19:25.576: INFO: watch delete seen for pod-submit-status-1-11 Mar 25 12:19:25.576: INFO: Pod pod-submit-status-1-11 on node latest-worker2 timings total=55.456051169s t=1.386s run=0s execute=0s Mar 25 12:19:26.112: INFO: watch delete seen for pod-submit-status-2-11 Mar 25 12:19:26.112: INFO: Pod pod-submit-status-2-11 on node latest-worker2 timings total=48.391032719s t=1.264s run=0s execute=0s Mar 25 12:19:26.566: INFO: watch delete seen for pod-submit-status-0-10 Mar 25 12:19:26.566: INFO: Pod pod-submit-status-0-10 on node latest-worker2 timings total=39.635935326s t=61ms run=0s execute=0s Mar 25 12:19:29.477: INFO: watch delete seen for pod-submit-status-0-11 Mar 25 12:19:29.477: INFO: Pod pod-submit-status-0-11 on node latest-worker2 timings total=2.911179793s t=392ms run=0s execute=0s Mar 25 12:19:47.668: INFO: watch delete seen for pod-submit-status-2-12 Mar 25 12:19:47.669: INFO: Pod pod-submit-status-2-12 on node latest-worker2 timings total=21.556302723s t=1.619s run=0s execute=0s Mar 25 12:19:50.604: INFO: watch delete seen for pod-submit-status-2-13 Mar 25 12:19:50.605: INFO: Pod pod-submit-status-2-13 on node latest-worker2 timings total=2.935936999s t=422ms run=0s execute=0s Mar 25 12:20:26.734: INFO: watch delete seen for pod-submit-status-0-12 Mar 25 12:20:26.734: INFO: Pod pod-submit-status-0-12 on node latest-worker2 timings total=57.257029066s t=1.454s run=0s execute=0s Mar 25 12:20:26.750: INFO: watch delete seen for pod-submit-status-1-12 Mar 25 12:20:26.751: INFO: Pod pod-submit-status-1-12 on node latest-worker timings total=1m1.174715226s t=559ms run=0s execute=0s Mar 25 12:20:28.013: INFO: watch delete seen for pod-submit-status-2-14 Mar 25 12:20:28.013: INFO: Pod pod-submit-status-2-14 on node latest-worker timings total=37.408135823s t=341ms run=0s execute=0s Mar 25 12:20:30.990: INFO: watch delete seen for pod-submit-status-1-13 Mar 25 12:20:30.990: INFO: Pod pod-submit-status-1-13 on node latest-worker2 timings total=4.239467543s t=1.46s run=0s execute=0s Mar 25 12:20:33.747: INFO: watch delete seen for pod-submit-status-0-13 Mar 25 12:20:33.747: INFO: Pod pod-submit-status-0-13 on node latest-worker2 timings total=7.012183556s t=1.819s run=0s execute=0s Mar 25 12:20:49.626: INFO: watch delete 
seen for pod-submit-status-0-14 Mar 25 12:20:49.626: INFO: Pod pod-submit-status-0-14 on node latest-worker2 timings total=15.879450417s t=663ms run=0s execute=0s Mar 25 12:21:35.621: INFO: watch delete seen for pod-submit-status-1-14 Mar 25 12:21:35.622: INFO: Pod pod-submit-status-1-14 on node latest-worker2 timings total=1m4.631411493s t=359ms run=0s execute=0s [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:21:35.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5425" for this suite. • [SLOW TEST:538.331 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:200 should never report success for a pending container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:206 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container Status should never report success for a pending container","total":58,"completed":36,"skipped":4389,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:21:35.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:21:51.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-1340" for this suite. 
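The plain "should support sysctls" variant above needs no node opt-in at all: kernel.shm_rmid_forced is namespaced per pod, which is what puts it on the default safe list. The interesting part is the verification step, where the pod's own log output doubles as the assertion. A sketch, assuming a 1.21-vintage k8s.io/api (the shell command is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "safe-sysctl-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				// Safe sysctls need no kubelet configuration at all.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Reading the value back from /proc is what "Checking that the
				// sysctl is actually updated" amounts to: the log must print 1.
				Command: []string{"/bin/sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}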
• [SLOW TEST:15.694 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:68 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":58,"completed":37,"skipped":4464,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:21:51.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:21:56.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6469" for this suite. 
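In the runAsNonRoot test above, nothing ever runs: with runAsNonRoot set and no numeric user ID available from either the pod spec or the image metadata, the kubelet refuses to create the container rather than risk starting it as root. A minimal sketch, assuming a 1.21-vintage k8s.io/api (the image choice is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	runAsNonRoot := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nonroot-no-uid-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // image metadata declares no numeric USER
				SecurityContext: &corev1.SecurityContext{
					// With no RunAsUser set anywhere, the kubelet cannot prove the
					// container would be non-root, so it rejects it at start time
					// with CreateContainerConfigError instead of running it as uid 0.
					RunAsNonRoot: &runAsNonRoot,
				},
				Command: []string{"id"},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}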
• [SLOW TEST:6.246 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsNonRoot /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:104 should not run without a specified user ID /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:159 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":58,"completed":38,"skipped":4590,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:21:57.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 STEP: Creating a pod to test downward api env vars Mar 25 12:21:58.872: INFO: Waiting up to 5m0s for pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8" in namespace "downward-api-7092" to be "Succeeded or Failed" Mar 25 12:21:59.562: INFO: Pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8": Phase="Pending", Reason="", readiness=false. Elapsed: 689.702736ms Mar 25 12:22:01.795: INFO: Pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.922985888s Mar 25 12:22:04.132: INFO: Pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.260041891s Mar 25 12:22:06.466: INFO: Pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8": Phase="Running", Reason="", readiness=true. Elapsed: 7.594188335s Mar 25 12:22:08.474: INFO: Pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.601675299s STEP: Saw pod success Mar 25 12:22:08.474: INFO: Pod "downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8" satisfied condition "Succeeded or Failed" Mar 25 12:22:08.845: INFO: Trying to get logs from node latest-worker2 pod downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8 container dapi-container: STEP: delete the pod Mar 25 12:22:10.858: INFO: Waiting for pod downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8 to disappear Mar 25 12:22:10.908: INFO: Pod downward-api-9a1c5c9e-a9fe-4bfa-ae7c-0c0757a377d8 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:22:10.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7092" for this suite. • [SLOW TEST:13.845 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/downwardapi.go:109 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":58,"completed":39,"skipped":4886,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:22:11.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 Mar 25 12:22:14.532: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47" in namespace "security-context-test-128" to be "Succeeded or Failed" Mar 25 12:22:14.895: INFO: Pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47": Phase="Pending", Reason="", readiness=false. Elapsed: 362.79546ms Mar 25 12:22:17.131: INFO: Pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.598793116s Mar 25 12:22:19.934: INFO: Pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.401832722s Mar 25 12:22:22.824: INFO: Pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47": Phase="Pending", Reason="", readiness=false. Elapsed: 8.292683501s Mar 25 12:22:25.525: INFO: Pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47": Phase="Failed", Reason="", readiness=false. Elapsed: 10.993109321s Mar 25 12:22:25.525: INFO: Pod "busybox-readonly-true-504dd2be-9fbe-4e33-95c0-ac9c45a73b47" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:22:25.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-128" for this suite. • [SLOW TEST:14.240 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a pod with readOnlyRootFilesystem /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:171 should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:217 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":58,"completed":40,"skipped":4939,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSS ------------------------------ [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:22:25.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-single-pod STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 Mar 25 12:22:27.811: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 12:23:27.830: INFO: Waiting for terminating namespaces to be deleted... [It] doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 Mar 25 12:23:28.967: INFO: Starting informer... STEP: Starting pod... Mar 25 12:23:30.883: INFO: Pod is running on latest-worker. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod to be deleted Mar 25 12:24:37.341: INFO: Pod wasn't evicted. 
------------------------------ [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:22:25.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-single-pod STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:164 Mar 25 12:22:27.811: INFO: Waiting up to 1m0s for all nodes to be ready Mar 25 12:23:27.830: INFO: Waiting for terminating namespaces to be deleted... [It] doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 Mar 25 12:23:28.967: INFO: Starting informer... STEP: Starting pod... Mar 25 12:23:30.883: INFO: Pod is running on latest-worker. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod to be deleted Mar 25 12:24:37.341: INFO: Pod wasn't evicted. Test successful STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:24:38.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-single-pod-2658" for this suite. • [SLOW TEST:133.291 seconds] [sig-node] NoExecuteTaintManager Single Pod [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 doesn't evict pod with tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:209 ------------------------------ {"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes","total":58,"completed":41,"skipped":4946,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
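------------------------------
Note on the NoExecuteTaintManager spec above: a NoExecute taint evicts every pod that does not tolerate it, so the spec gives its pod a toleration matching the taint it applies and then verifies the pod stays put. Roughly the shape of that pod (sketch; the suite applies the taint via the API rather than kubectl):

  apiVersion: v1
  kind: Pod
  metadata:
    name: taint-tolerating-pod   # hypothetical name
  spec:
    tolerations:
    - key: kubernetes.io/e2e-evict-taint-key
      operator: Equal
      value: evictTaintVal
      effect: NoExecute
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2

A toleration with a bounded tolerationSeconds only delays eviction, which is the behavior the suite's failed "finite tolerations" spec covers.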
------------------------------ [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:24:39.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 12:24:42.523: INFO: Waiting up to 5m0s for pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df" in namespace "security-context-5094" to be "Succeeded or Failed" Mar 25 12:24:42.825: INFO: Pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df": Phase="Pending", Reason="", readiness=false. Elapsed: 301.788304ms Mar 25 12:24:44.919: INFO: Pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.396035406s Mar 25 12:24:46.999: INFO: Pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df": Phase="Pending", Reason="", readiness=false. Elapsed: 4.47572965s Mar 25 12:24:49.058: INFO: Pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.535087082s Mar 25 12:24:51.101: INFO: Pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.577528565s STEP: Saw pod success Mar 25 12:24:51.101: INFO: Pod "security-context-0d944662-940c-4305-86ff-3e672d1ef1df" satisfied condition "Succeeded or Failed" Mar 25 12:24:51.316: INFO: Trying to get logs from node latest-worker pod security-context-0d944662-940c-4305-86ff-3e672d1ef1df container test-container: STEP: delete the pod Mar 25 12:24:51.695: INFO: Waiting for pod security-context-0d944662-940c-4305-86ff-3e672d1ef1df to disappear Mar 25 12:24:51.783: INFO: Pod security-context-0d944662-940c-4305-86ff-3e672d1ef1df no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:24:51.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-5094" for this suite. • [SLOW TEST:12.962 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp unconfined on the pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:169 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":58,"completed":42,"skipped":4992,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSS
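------------------------------
Note on the seccomp unconfined spec above: at this version the test still sets the legacy pod annotation (hence the "Creating a pod to test seccomp.security.alpha.kubernetes.io/pod" step). The GA equivalent is the securityContext field; a sketch (illustrative):

  apiVersion: v1
  kind: Pod
  metadata:
    name: seccomp-unconfined   # hypothetical name
  spec:
    securityContext:
      seccompProfile:
        type: Unconfined   # no seccomp filtering for the pod's containers
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["true"]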
------------------------------ [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:24:51.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 STEP: create the container STEP: check the container status STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:24:58.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2539" for this suite. • [SLOW TEST:7.666 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when running a container with a new image /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:266 should not be able to pull image from invalid registry [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:377 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":58,"completed":43,"skipped":5012,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSS
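------------------------------
Note on the invalid-registry spec above: the blackbox test points a container image at a registry that cannot serve it and asserts the pull never succeeds. The same failure mode can be reproduced with any unreachable image reference (the host below is deliberately bogus):

  apiVersion: v1
  kind: Pod
  metadata:
    name: invalid-registry-pull   # hypothetical name
  spec:
    containers:
    - name: test
      image: invalid.registry.example/does-not-exist:latest

The kubelet surfaces the failed pull in the container status as the waiting reasons ErrImagePull and then ImagePullBackOff, which is roughly what the "check the container status" step inspects.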
------------------------------ [sig-node] Security Context should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:24:59.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod Mar 25 12:25:01.586: INFO: Waiting up to 5m0s for pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1" in namespace "security-context-7161" to be "Succeeded or Failed" Mar 25 12:25:01.799: INFO: Pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 213.237807ms Mar 25 12:25:04.055: INFO: Pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.469251149s Mar 25 12:25:06.155: INFO: Pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.569078456s Mar 25 12:25:09.033: INFO: Pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.447316874s Mar 25 12:25:11.243: INFO: Pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.656965316s STEP: Saw pod success Mar 25 12:25:11.243: INFO: Pod "security-context-6db90c08-bb21-4116-9040-b81a67e85ed1" satisfied condition "Succeeded or Failed" Mar 25 12:25:11.371: INFO: Trying to get logs from node latest-worker pod security-context-6db90c08-bb21-4116-9040-b81a67e85ed1 container test-container: STEP: delete the pod Mar 25 12:25:12.855: INFO: Waiting for pod security-context-6db90c08-bb21-4116-9040-b81a67e85ed1 to disappear Mar 25 12:25:12.999: INFO: Pod security-context-6db90c08-bb21-4116-9040-b81a67e85ed1 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:25:12.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7161" for this suite. • [SLOW TEST:13.498 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should support seccomp runtime/default [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:176 ------------------------------ {"msg":"PASSED [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":58,"completed":44,"skipped":5028,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSS
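------------------------------
Note on the seccomp runtime/default spec above: same mechanism as the unconfined spec, but opting in to the container runtime's default filter. The annotation value "runtime/default" corresponds to the GA field form (sketch):

  apiVersion: v1
  kind: Pod
  metadata:
    name: seccomp-runtime-default   # hypothetical name
  spec:
    securityContext:
      seccompProfile:
        type: RuntimeDefault
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["true"]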
------------------------------ [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:25:13.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 Mar 25 12:25:15.184: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c" in namespace "security-context-test-235" to be "Succeeded or Failed" Mar 25 12:25:15.643: INFO: Pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c": Phase="Pending", Reason="", readiness=false. Elapsed: 458.628369ms Mar 25 12:25:17.911: INFO: Pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726278709s Mar 25 12:25:20.019: INFO: Pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.834961023s Mar 25 12:25:22.519: INFO: Pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.334905326s Mar 25 12:25:24.574: INFO: Pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.390161583s Mar 25 12:25:24.575: INFO: Pod "alpine-nnp-true-3ecd0dd0-e204-41a0-8635-f6531e0c553c" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:25:24.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-235" for this suite. • [SLOW TEST:11.809 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should allow privilege escalation when true [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:367 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":58,"completed":45,"skipped":5037,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
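------------------------------
Note on the AllowPrivilegeEscalation spec above: allowPrivilegeEscalation: true leaves the process's no_new_privs flag unset, so a setuid binary run by a non-root user can still gain privileges, which is what the spec verifies. Container-level sketch (illustrative; the "alpine-nnp-true" pod name suggests the suite uses its own alpine-based test image):

  apiVersion: v1
  kind: Pod
  metadata:
    name: alpine-nnp-true   # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: alpine
      securityContext:
        runAsUser: 1000
        allowPrivilegeEscalation: true   # false would set no_new_privs on the process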
------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:25:24.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] should have OwnerReferences set /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:88 [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:25:26.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2201" for this suite. •{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":58,"completed":46,"skipped":5130,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
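------------------------------
Note on the NodeLease OwnerReferences spec above: each kubelet maintains a Lease named after its node in the kube-node-lease namespace, and the Lease should carry an ownerReference pointing back at the Node object so it is garbage-collected with the node. Roughly what the object looks like (uid elided):

  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: latest-worker
    namespace: kube-node-lease
    ownerReferences:
    - apiVersion: v1
      kind: Node
      name: latest-worker
      uid: "<uid of the Node object>"   # elided
  spec:
    holderIdentity: latest-worker
    leaseDurationSeconds: 40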
------------------------------ [sig-node] Probing container should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:25:26.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:53 [It] should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 STEP: Creating pod startup-34c551d8-925c-4de2-be51-c567bc7aa15b in namespace container-probe-6055 Mar 25 12:25:39.550: INFO: Started pod startup-34c551d8-925c-4de2-be51-c567bc7aa15b in namespace container-probe-6055 STEP: checking the pod's current state and verifying that restartCount is present Mar 25 12:25:39.600: INFO: Initial restart count of pod startup-34c551d8-925c-4de2-be51-c567bc7aa15b is 0 Mar 25 12:27:48.642: INFO: Restart count of pod container-probe-6055/startup-34c551d8-925c-4de2-be51-c567bc7aa15b is now 1 (2m9.041032522s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:27:48.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6055" for this suite. • [SLOW TEST:142.604 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted startup probe fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:289 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted startup probe fails","total":58,"completed":47,"skipped":5242,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
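------------------------------
Note on the startup-probe spec above: a container whose startupProbe keeps failing is killed and restarted once roughly failureThreshold * periodSeconds has elapsed, which lines up with the two minutes or so before restartCount went from 0 to 1 here. A sketch of such a pod (thresholds illustrative, not the suite's exact values):

  apiVersion: v1
  kind: Pod
  metadata:
    name: startup-probe-fails   # hypothetical name
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sleep", "600"]
      startupProbe:
        exec:
          command: ["/bin/false"]   # never succeeds
        periodSeconds: 10
        failureThreshold: 12        # ~120s before the kubelet gives up and restarts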
------------------------------ [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778 [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:27:48.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187 [It] should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778 STEP: submitting the pod to kubernetes STEP: patching pod status with condition "k8s.io/test-condition1" to true STEP: patching pod status with condition "k8s.io/test-condition2" to true STEP: patching pod status with condition "k8s.io/test-condition1" to false [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:28:08.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3190" for this suite. • [SLOW TEST:20.087 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support pod readiness gates [NodeFeature:PodReadinessGate] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:778 ------------------------------ {"msg":"PASSED [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":58,"completed":48,"skipped":5325,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
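------------------------------
Note on the readiness-gates spec above: with spec.readinessGates set, a pod reports Ready only when its containers are ready and every listed condition is True in pod status; that is why patching "k8s.io/test-condition1" back to false flips the pod out of Ready again. Sketch using the condition types from the log:

  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-gate-pod   # hypothetical name
  spec:
    readinessGates:
    - conditionType: k8s.io/test-condition1
    - conditionType: k8s.io/test-condition2
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.2

The custom conditions are owned by an external controller (here, the test itself), which sets them by patching the pod's /status subresource.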
------------------------------ [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:09.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:446 [It] should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 STEP: creating the pod that should always exit 0 STEP: submitting the pod to kubernetes Mar 25 12:28:09.404: INFO: Waiting up to 5m0s for pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09" in namespace "pods-6709" to be "Succeeded or Failed" Mar 25 12:28:10.594: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Pending", Reason="", readiness=false. Elapsed: 1.189699991s Mar 25 12:28:12.752: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.347523923s Mar 25 12:28:15.516: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111407292s Mar 25 12:28:18.329: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.924819515s Mar 25 12:28:20.537: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Pending", Reason="", readiness=false. Elapsed: 11.133025659s Mar 25 12:28:22.723: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Pending", Reason="", readiness=false. Elapsed: 13.318594034s Mar 25 12:28:24.777: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Running", Reason="", readiness=true. Elapsed: 15.372781013s Mar 25 12:28:27.877: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.47312563s STEP: Saw pod success Mar 25 12:28:27.877: INFO: Pod "pod-always-succeedbfd718c9-e45f-47fa-8aae-59fc0fb94a09" satisfied condition "Succeeded or Failed" STEP: Getting events about the pod STEP: Checking events about the pod STEP: deleting the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:28:31.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6709" for this suite. • [SLOW TEST:23.356 seconds] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 Pod Container lifecycle /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:444 should not create extra sandbox if all containers are done /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:450 ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done","total":58,"completed":49,"skipped":5474,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:32.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should create and update a lease in the kube-node-lease namespace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:50 STEP: check that lease for this Kubelet exists in the kube-node-lease namespace STEP: check that node lease is updated at least once within the lease duration [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:28:34.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-8177" for this suite.
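------------------------------
Note on the NodeLease specs here and the node-status spec that follows: the kubelet renews its Lease every few seconds as a lightweight heartbeat, while full NodeStatus writes happen far less often; the 1m20s the later spec waits for an unchanged heartbeat matches twice the default 40s lease duration. The relevant knobs, as a KubeletConfiguration sketch (upstream defaults, not necessarily this cluster's values):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  nodeLeaseDurationSeconds: 40     # lease TTL; renewed well within this window
  nodeStatusUpdateFrequency: 10s   # how often status is recomputed
  nodeStatusReportFrequency: 5m    # unchanged status is posted only this often
------------------------------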
•{"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":58,"completed":50,"skipped":5522,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 [BeforeEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:34.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename apparmor STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:32 Mar 25 12:28:37.215: INFO: Only supported for node OS distro [gci ubuntu] (not debian) [AfterEach] load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:36 [AfterEach] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:28:37.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "apparmor-1120" for this suite. S [SKIPPING] in Spec Setup (BeforeEach) [2.455 seconds] [sig-node] AppArmor /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 load AppArmor profiles /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31 can disable an AppArmor profile, using unconfined [BeforeEach] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:47 Only supported for node OS distro [gci ubuntu] (not debian) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/skipper/skipper.go:275 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Mar 25 12:28:37.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename node-lease-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 STEP: wait until node is ready Mar 25 12:28:37.878: INFO: Waiting up to 5m0s for node latest-worker condition 
Ready to be true STEP: wait until there is node lease STEP: verify NodeStatus report period is longer than lease duration Mar 25 12:28:40.175: INFO: node status heartbeat is unchanged for 1.184403524s, waiting for 1m20s Mar 25 12:28:41.042: INFO: node status heartbeat is unchanged for 2.051669703s, waiting for 1m20s Mar 25 12:28:42.043: INFO: node status heartbeat is unchanged for 3.05223126s, waiting for 1m20s Mar 25 12:28:43.039: INFO: node status heartbeat is unchanged for 4.048333046s, waiting for 1m20s Mar 25 12:28:44.032: INFO: node status heartbeat is unchanged for 5.041633997s, waiting for 1m20s Mar 25 12:28:45.061: INFO: node status heartbeat is unchanged for 6.070872703s, waiting for 1m20s Mar 25 12:28:46.590: INFO: node status heartbeat is unchanged for 7.599671771s, waiting for 1m20s Mar 25 12:28:47.283: INFO: node status heartbeat is unchanged for 8.292400537s, waiting for 1m20s Mar 25 12:28:48.331: INFO: node status heartbeat is unchanged for 9.34031971s, waiting for 1m20s Mar 25 12:28:49.036: INFO: node status heartbeat is unchanged for 10.045879033s, waiting for 1m20s Mar 25 12:28:50.033: INFO: node status heartbeat is unchanged for 11.042740444s, waiting for 1m20s Mar 25 12:28:51.051: INFO: node status heartbeat is unchanged for 12.060583505s, waiting for 1m20s Mar 25 12:28:52.157: INFO: node status heartbeat is unchanged for 13.16641821s, waiting for 1m20s Mar 25 12:28:53.343: INFO: node status heartbeat is unchanged for 14.352772764s, waiting for 1m20s Mar 25 12:28:54.576: INFO: node status heartbeat is unchanged for 15.585288112s, waiting for 1m20s Mar 25 12:28:55.405: INFO: node status heartbeat is unchanged for 16.414464118s, waiting for 1m20s Mar 25 12:28:56.152: INFO: node status heartbeat is unchanged for 17.161672919s, waiting for 1m20s Mar 25 12:28:57.705: INFO: node status heartbeat is unchanged for 18.71465923s, waiting for 1m20s Mar 25 12:28:58.127: INFO: node status heartbeat is unchanged for 19.136532147s, waiting for 1m20s Mar 25 12:29:00.050: INFO: node status heartbeat is unchanged for 21.059782243s, waiting for 1m20s Mar 25 12:29:01.535: INFO: node status heartbeat is unchanged for 22.544612105s, waiting for 1m20s Mar 25 12:29:02.554: INFO: node status heartbeat is unchanged for 23.563342302s, waiting for 1m20s Mar 25 12:29:03.350: INFO: node status heartbeat is unchanged for 24.359207576s, waiting for 1m20s Mar 25 12:29:04.215: INFO: node status heartbeat is unchanged for 25.224717967s, waiting for 1m20s Mar 25 12:29:05.001: INFO: node status heartbeat is unchanged for 26.010157975s, waiting for 1m20s Mar 25 12:29:06.011: INFO: node status heartbeat is unchanged for 27.020343586s, waiting for 1m20s Mar 25 12:29:07.042: INFO: node status heartbeat is unchanged for 28.051255078s, waiting for 1m20s Mar 25 12:29:08.020: INFO: node status heartbeat is unchanged for 29.029778608s, waiting for 1m20s Mar 25 12:29:09.619: INFO: node status heartbeat is unchanged for 30.628147603s, waiting for 1m20s Mar 25 12:29:10.043: INFO: node status heartbeat is unchanged for 31.052151549s, waiting for 1m20s Mar 25 12:29:11.029: INFO: node status heartbeat is unchanged for 32.038698639s, waiting for 1m20s Mar 25 12:29:12.143: INFO: node status heartbeat is unchanged for 33.152751801s, waiting for 1m20s Mar 25 12:29:13.175: INFO: node status heartbeat is unchanged for 34.18449707s, waiting for 1m20s Mar 25 12:29:14.003: INFO: node status heartbeat is unchanged for 35.012675013s, waiting for 1m20s Mar 25 12:29:15.156: INFO: node status heartbeat is unchanged for 36.165758783s, 
waiting for 1m20s Mar 25 12:29:17.287: INFO: node status heartbeat is unchanged for 38.296254386s, waiting for 1m20s Mar 25 12:29:18.528: INFO: node status heartbeat is unchanged for 39.537475355s, waiting for 1m20s Mar 25 12:29:19.105: INFO: node status heartbeat is unchanged for 40.114441039s, waiting for 1m20s Mar 25 12:29:20.702: INFO: node status heartbeat is unchanged for 41.711577457s, waiting for 1m20s Mar 25 12:29:21.259: INFO: node status heartbeat is unchanged for 42.268475194s, waiting for 1m20s Mar 25 12:29:22.410: INFO: node status heartbeat is unchanged for 43.419802722s, waiting for 1m20s Mar 25 12:29:23.482: INFO: node status heartbeat is unchanged for 44.491346403s, waiting for 1m20s Mar 25 12:29:24.587: INFO: node status heartbeat is unchanged for 45.596553928s, waiting for 1m20s Mar 25 12:29:25.018: INFO: node status heartbeat is unchanged for 46.027932632s, waiting for 1m20s Mar 25 12:29:26.397: INFO: node status heartbeat is unchanged for 47.40622213s, waiting for 1m20s Mar 25 12:29:27.182: INFO: node status heartbeat is unchanged for 48.191635901s, waiting for 1m20s Mar 25 12:29:28.124: INFO: node status heartbeat is unchanged for 49.133645631s, waiting for 1m20s Mar 25 12:29:29.442: INFO: node status heartbeat is unchanged for 50.451458257s, waiting for 1m20s Mar 25 12:29:30.306: INFO: node status heartbeat is unchanged for 51.315816532s, waiting for 1m20s Mar 25 12:29:31.175: INFO: node status heartbeat is unchanged for 52.184904555s, waiting for 1m20s Mar 25 12:29:32.355: INFO: node status heartbeat is unchanged for 53.364129434s, waiting for 1m20s Mar 25 12:29:33.129: INFO: node status heartbeat is unchanged for 54.138150778s, waiting for 1m20s Mar 25 12:29:34.073: INFO: node status heartbeat is unchanged for 55.082797625s, waiting for 1m20s Mar 25 12:29:35.101: INFO: node status heartbeat is unchanged for 56.110362022s, waiting for 1m20s Mar 25 12:29:36.564: INFO: node status heartbeat is unchanged for 57.573849713s, waiting for 1m20s Mar 25 12:29:37.290: INFO: node status heartbeat is unchanged for 58.299450861s, waiting for 1m20s Mar 25 12:29:38.073: INFO: node status heartbeat is unchanged for 59.083024816s, waiting for 1m20s Mar 25 12:29:39.312: INFO: node status heartbeat is unchanged for 1m0.321775988s, waiting for 1m20s Mar 25 12:29:40.272: INFO: node status heartbeat is unchanged for 1m1.28124068s, waiting for 1m20s Mar 25 12:29:41.068: INFO: node status heartbeat is unchanged for 1m2.077278614s, waiting for 1m20s Mar 25 12:29:42.061: INFO: node status heartbeat is unchanged for 1m3.07076973s, waiting for 1m20s Mar 25 12:29:43.577: INFO: node status heartbeat is unchanged for 1m4.586371724s, waiting for 1m20s Mar 25 12:29:44.187: INFO: node status heartbeat is unchanged for 1m5.196637855s, waiting for 1m20s Mar 25 12:29:45.003: INFO: node status heartbeat is unchanged for 1m6.012145055s, waiting for 1m20s Mar 25 12:29:46.064: INFO: node status heartbeat is unchanged for 1m7.073796033s, waiting for 1m20s Mar 25 12:29:47.149: INFO: node status heartbeat is unchanged for 1m8.158811161s, waiting for 1m20s Mar 25 12:29:48.001: INFO: node status heartbeat is unchanged for 1m9.010149879s, waiting for 1m20s Mar 25 12:29:49.154: INFO: node status heartbeat is unchanged for 1m10.16345767s, waiting for 1m20s Mar 25 12:29:49.996: INFO: node status heartbeat is unchanged for 1m11.005804723s, waiting for 1m20s Mar 25 12:29:51.541: INFO: node status heartbeat is unchanged for 1m12.550385579s, waiting for 1m20s Mar 25 12:29:52.597: INFO: node status heartbeat is 
unchanged for 1m13.606123288s, waiting for 1m20s Mar 25 12:29:53.043: INFO: node status heartbeat is unchanged for 1m14.05215508s, waiting for 1m20s Mar 25 12:29:54.355: INFO: node status heartbeat is unchanged for 1m15.365104725s, waiting for 1m20s Mar 25 12:29:55.333: INFO: node status heartbeat is unchanged for 1m16.342553893s, waiting for 1m20s Mar 25 12:29:56.396: INFO: node status heartbeat is unchanged for 1m17.405928402s, waiting for 1m20s Mar 25 12:29:57.407: INFO: node status heartbeat is unchanged for 1m18.41695342s, waiting for 1m20s Mar 25 12:29:58.055: INFO: node status heartbeat is unchanged for 1m19.065060016s, waiting for 1m20s Mar 25 12:29:59.597: INFO: node status heartbeat is unchanged for 1m20.606515463s, was waiting for at least 1m20s, success! STEP: verify node is still in ready status even though node status report is infrequent [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Mar 25 12:29:59.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "node-lease-test-2688" for this suite. • [SLOW TEST:82.874 seconds] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when the NodeLease feature is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:49 the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 ------------------------------ {"msg":"PASSED [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently","total":58,"completed":51,"skipped":5624,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMar 25 12:30:00.325: INFO: Running AfterSuite actions on all nodes Mar 25 12:30:00.325: INFO: Running AfterSuite actions on node 1 Mar 25 12:30:00.325: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/sig_node/junit_01.xml {"msg":"Test Suite completed","total":58,"completed":51,"skipped":5684,"failed":2,"failures":["[sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes","[sig-node] Probing container should be ready immediately after startupProbe succeeds"]} Summarizing 2 Failures: [Fail] [sig-node] NoExecuteTaintManager Single Pod [Serial] [It] eventually evict pod with finite tolerations from tainted nodes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 [Fail] [sig-node] Probing container [It] should be ready immediately after startupProbe succeeds /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 Ran 53 of 5737 Specs in 4837.924 seconds FAIL! -- 51 Passed | 2 Failed | 0 Pending | 5684 Skipped --- FAIL: TestE2E (4838.25s) FAIL
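------------------------------
Note on the two persistent failures: both are timing-sensitive taint-eviction and probe-readiness behaviors, and they can be re-run in isolation by passing their names to the e2e binary's --ginkgo.focus flag. The readiness failure concerns a pod whose Ready condition should flip immediately once its startupProbe first succeeds, along these lines (sketch only; the suite's actual image, command, and thresholds differ):

  apiVersion: v1
  kind: Pod
  metadata:
    name: startup-then-ready   # hypothetical name
  spec:
    containers:
    - name: busybox
      image: busybox
      command: ["sh", "-c", "sleep 30; touch /tmp/started; sleep 600"]
      startupProbe:
        exec:
          command: ["cat", "/tmp/started"]   # succeeds once the file exists
        periodSeconds: 5
        failureThreshold: 60

Once the startup probe passes, the kubelet marks the container started and readiness should follow without additional delay, which is the property the failing spec checks.
------------------------------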